
Validation of the Rainbow Model of Integrated Care Measurement Tools (RMIC-MTs) in renal care for patient and care providers


Authors: Pim P. Valentijn aff001;  Fernando Pereira aff004;  Christina W. Sterner aff005;  Hubertus J. M. Vrijhoef aff001;  Dirk Ruwaard aff002;  Jörgen Hegbrant aff008;  Giovanni F. M. Strippoli aff008
Authors place of work: Department of Patient and Care, Maastricht University Medical Center, Maastricht, The Netherlands aff001;  Department of Health Services Research, Care and Public Health Research Institute (CAPHRI), Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands aff002;  Integrated Care Evaluation, Essenburgh, Hierden, The Netherlands aff003;  Strategy and Health Economics Office, Diaverum, Madrid, Spain aff004;  Strategy and Health Economics Office, Diaverum, Gothenburg, Sweden aff005;  Department of Family Medicine, Vrije Universiteit Brussel, Brussels, Belgium aff006;  Panaxea, Amsterdam, The Netherlands aff007;  Diaverum Medical Scientific Office, Diaverum Sweden AB, Lund, Sweden aff008;  Sydney School of Public Health, The University of Sydney, Sydney, Australia aff009;  Department of Emergency and Organ Transplantation, University of Bari, Bari, Italy aff010
Published in the journal: PLoS ONE 14(9)
Category: Research Article
doi: https://doi.org/10.1371/journal.pone.0222593

Summary

Introduction

Integrated service delivery is considered to be an essential condition for improving the management and health outcomes of people with chronic kidney disease (CKD). However, research on the assessment of integrated care by patients and care providers is hindered by the absence of brief, reliable, and valid measurement tools.

Objective

The aim of this study was to develop survey instruments for healthcare professionals and patients based on the Rainbow Model of Integrated Care (RMIC), and to evaluate their psychometric properties.

Design

The development process was based on the US Food and Drug Administration guidelines. This included item generation from systematic reviews of existing tools and expert opinion on clarity and content validity, involving renal care providers and chronic kidney disease patients. A cross-sectional, multi-centre design was used to test for internal consistency and construct validity.

Setting

Outpatient clinics in a large renal network.

Participants

A sample of 30,788 CKD patients and 8,914 renal care providers.

Methods and analysis

Both survey instruments were developed using previous qualitative work and the published literature. A multidisciplinary expert panel assessed the face and content validity of both instruments, and following a pilot study, the psychometric properties of both instruments were explored. Exploratory factor analysis with principal axis factoring and promax rotation was used to assess the underlying dimensions of both instruments; Cronbach’s alpha was used to determine the internal consistency reliability.

Results

17,512 patients (response rate: 56.9%) and 5,849 care providers (response rate: 69.5%) responded to the questionnaires. Factor analysis of the patient questionnaire yielded three internally consistent (Cronbach’s alpha > 0.7) factors: person-centeredness, clinical coordination, and professional coordination. Factor analysis of the provider questionnaire produced eight internally consistent (Cronbach’s alpha > 0.7) factors: person-centeredness, community centeredness, clinical coordination, professional coordination, organisational coordination, system coordination, technical competence, and cultural competence. As hypothesised, patient and provider care coordination scores correlated significantly with questions about quality of care, treatment involvement, reported health, the clinics’ organisational readiness, and external care coordination capacity.

Conclusion

This study provides evidence for the reliability and validity of the RMIC patient and provider questionnaires as generic tools to assess the experience with, or perception of, integrated renal care delivery. The instruments are recommended for future applications that test test-retest reliability, convergent and predictive validity, and responsiveness.

Keywords:

Biology and life sciences – Physical sciences – Research and analysis methods – Psychology – Social sciences – Sociology – Mathematics – Medicine and health sciences – Statistics – Mathematical and statistical techniques – Statistical methods – Research design – Survey research – Research assessment – Culture – Questionnaires – Factor analysis – Algebra – Linear algebra – Eigenvalues – Nephrology – Chronic kidney disease – Medical dialysis – Research validity – Psychometrics

Introduction

The number of people with Chronic Kidney Disease (CKD) across the world is growing due to the rising rates of diabetes mellitus, obesity, hypertension and an ageing population [1]. While the number of CKD patients is growing, so are the complexities of their healthcare needs. In fact, 87% of this population has two or more chronic illnesses, or multimorbidity [2]. A growing number of CKD patients needs health services from multiple providers and units over time, and hence requires the coordinated delivery of care. Yet, studies have shown that CKD patients experience multiple obstacles, such as communication barriers, polypharmacy, delay in referral to a nephrologist, and lack of clear delineation of responsibilities among providers, throughout their treatment trajectory [3–5]. Reasons for this suboptimal situation are rooted in the complex interplay of clinical, professional, organisational and system factors influencing access to and coordination of care. Valid and reliable measurement instruments are needed to assess how these factors influence the process of care coordination, and to pinpoint key areas for improvement in delivering integrated renal care.

Over the past decade, integrated care research has been criticised both for lacking a clear definition and for methodological problems relating to its measurement, including validity and reliability [6]. Integrated care shows great overlap with related concepts such as patient-centred care and coordinated care [7,8], which makes the concept difficult to measure. In general, coordinated care refers to teamwork between different care providers, and patient-centred care is about involving patients in their own care. Integrated care, however, is considered an overarching multidimensional concept that includes not only patient involvement and collaboration among care providers but also cross-boundary cooperation between different care organisations [7,8]. The World Health Organization (WHO) has defined integrated care as health services that are managed and delivered in such a way that patients receive a continuum of preventive and curative services, according to their needs over time, coordinated across the different levels (e.g. clinical, professional, organisational) of the health system [9]. A guiding influence for this definition was the Rainbow Model of Integrated Care (RMIC), which identifies four core (person-centeredness, service coordination, professional coordination, and organisational coordination) and four ancillary (community-centeredness, technical competence, cultural competence, and system context) integrated care domains [10]. Throughout this article, coordinated care and integrated care are used interchangeably and referred to as ‘integrated care’.

A recent systematic review has shown that measurement instruments assessing integrated care have significant limitations [6]. Firstly, the psychometric properties (e.g. validity, reliability) of most measurement instruments are of low to moderate quality. Secondly, the majority of existing instruments contain scales to assess the person-focused care, clinical coordination and professional coordination domains within institutional settings, and fail to assess the full range of integrated care domains as described by the RMIC. Finally, instruments with desirable psychometric properties have too many items to be practical for use in routine practice. In sum, there is a lack of brief, reliable, and validated measurement tools for assessing the multidimensional concept of integrated care from the perspectives of patients and care providers.

Based on the RMIC, a literature review and two international Delphi studies were conducted to develop the first version of a measurement tool (MT). The RMIC-MT had 44 items assessing healthcare providers’ perceptions of the delivery of integrated care across the eight domains of the RMIC on a four-point Likert scale (S1 Table). This preliminary version of the RMIC-MT has been tested in the Netherlands [11], Australia [12], and Singapore [13]. These studies showed that the RMIC-MT provider version was a highly relevant and easy-to-use instrument with good psychometric properties for the clinical coordination, cultural competence and person-centeredness scales. However, further work was needed to improve the psychometric properties of the professional coordination, organisational coordination, system coordination, technical competence and community-centeredness scales, and respondents advised developing an RMIC-MT patient version [14]. The aim of this study was to develop the RMIC-MT for CKD patients and renal care providers, and to explore the factor structure and psychometric properties of the RMIC-MT patient and provider versions.

Methods

The development of the RMIC-MT patient and provider version was based on the PROM guidelines published by the US Food and Drug Administration [15], and encompassed three stages: 1) generation of items that represent the domains of the RMIC; 2) evaluation of face and content validity, clarity and feasibility; and 3) validation of the scales of the RMIC-MT patient and provider version (Fig 1).

Fig. 1.

Study design.


* Based on Nurjono et al. (2016) [13] and Valentijn et al. (2015) [19]. ** Based on Bautista et al. (2016) [6], Uijen et al. (2012) [16], and a search of the grey literature [22, 35–37].

Instrument development

Item generation

The preliminary version of the RMIC-MT was used to develop improved patient and provider versions [12,13]. The development drew on two systematic reviews of care coordination and integration questionnaires [6,16] and an additional search of the grey literature. One researcher (PV) reviewed all questionnaires and generated an item pool of clinician-reported and patient-reported measures that represented the domains of the RMIC.

Item evaluation

Two researchers (PV, FAP) independently reviewed the item pool of the RMIC-MT patient and provider version using a standardized evaluation form. Any discrepancies were resolved through iteration and discussion. Items were considered eligible if: 1) the content of an item reflected the core and ancillary domains of the RMIC for the provider version, and only the core domains for the patient version; 2) the items provided evidence for their reliability, validity and responsiveness based on the COSMIN scores reported by Uijen et al. (2012) [16] and Bautista et al. (2016) [6]; and 3) the items imposed a minimal user burden (i.e. ≤ 9 question items and simple response categories) on patients and care providers.

Instrument evaluation

Assessment of face and content validity

An expert panel of seventeen persons in the UK was convened to assess the face and content validity of the RMIC-MT patient and provider version. This panel was multidisciplinary and represented the following stakeholder groups: 1) practitioners (e.g. nephrologists, nurses, or dieticians); 2) managers (e.g. clinic managers or human resource directors); and 3) service users (e.g. people with end-stage kidney disease treated with haemodialysis). Each panel member was asked to review both instruments independently using the following criteria: 1) the clarity of the questions and instruction texts (yes or no); 2) the redundancy of the questions included (yes or no); and 3) the relevance of the questions included for measuring integrated care, rated on a four-point Likert scale ranging from (1) not relevant to (4) highly relevant. A space for comments on each question was provided, and members were also asked to review the demographic questions. The first two criteria (clarity and redundancy) were used to assess the face validity of both the RMIC-MT patient and provider version. The third criterion (relevance) was used to assess the content validity of both instruments. Based on the relevance score of each item, the Item Content Validity Index (I-CVI) was calculated. The I-CVI is the proportion of expert panel members who rated an item 3 or 4. For each scale, a Scale Content Validity Index (S-CVI Ave) was calculated as the average of the I-CVIs of the individual items. An I-CVI of 0.78 or higher and an S-CVI of 0.90 or higher are considered excellent [17]. Based on these criteria, the final RMIC-MT patient and provider versions were produced.
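Because the I-CVI and S-CVI Ave are described above only in words, the following is a minimal sketch of the calculation; the ratings matrix (expert panel members by items) is hypothetical, not the study data.

```python
# Minimal sketch of the I-CVI and S-CVI Ave calculation described above
# (hypothetical relevance ratings: rows = expert panel members, columns = items,
# values = 1-4 relevance scores).
import numpy as np

ratings = np.array([
    [4, 3, 2, 4],   # expert 1
    [4, 4, 3, 3],   # expert 2
    [3, 4, 2, 4],   # expert 3
    [4, 3, 3, 4],   # expert 4
])

# I-CVI: proportion of experts who rated the item 3 or 4 ("relevant").
i_cvi = (ratings >= 3).mean(axis=0)

# S-CVI Ave: average of the I-CVIs of the items in the scale.
s_cvi_ave = i_cvi.mean()

print("I-CVI per item:", i_cvi)            # compared against the 0.78 cut-off
print("S-CVI Ave:", round(s_cvi_ave, 2))   # compared against the 0.90 cut-off
```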

Testing for clarity and feasibility for use with patients

The RMIC-MT patient version was tested with a sample of 53 CKD patients selected from a collaborative network of 4 dialysis clinics in the UK. Patients were eligible to participate if they were aged over 18 years, had end-stage kidney disease, were able to communicate in English, and consented to participate. The instrument was administered by a research assistant using a tablet computer. Data collection took place in June 2017. A standardized feedback form was used to further evaluate the RMIC-MT patient version for length, clarity, and the presence of distressing questions [18]. In addition, patients were asked to identify which questions, if any, they found difficult to answer, and whether any of the questions had concerned or upset them. Space was provided for additional comments.

Translating into required languages

The English versions of the RMIC-MT patient and provider instruments were translated into Spanish, Arabic, French, German, Hungarian, Italian, Kazakh, Lithuanian, Polish, Portuguese, Romanian, Russian, and Swedish by a translation company with the relevant linguistic background. The translated versions of the RMIC-MTs were independently reviewed against the original English version by a country manager or lead physician, and minor revisions were made to adapt them to the local language and culture. No points of misunderstanding were detected. Subsequently, the translated versions of the RMIC-MT patient and provider instruments were administered in the study.

Instrument validation

Design

A cross-sectional study design with a convenience sample of 8,421 renal care providers (e.g. nephrologists, nurses, and management) and 30,788 CKD patients within an international collaborative network of 316 dialysis clinics in 19 countries (Argentina, Australia, Chile, France, Germany, Hungary, Italy, Kazakhstan, Lithuania, New Zealand, Poland, Portugal, Romania, Russia, Saudi Arabia, Spain, Sweden, the UK, and Uruguay) was used for the validation of the RMIC-MT patient and provider version. Participating clinics received a written information package consisting of an introduction letter and a patient information sheet to inform the clinic managers, care providers and patients about the study’s purpose and data collection methods. The RMIC-MTs were distributed to all clinic managers, care providers and patients in each of the participating sites between 25 September and 13 November 2017. All participants were asked to provide written informed consent before enrolment in the study. The RMIC-MTs were completed online using a web-based survey platform. A forced answering procedure (i.e. respondents had to answer each question before they were allowed to proceed to the next question) was used to prevent missing answers [19]. Via clinic-specific codes assigned to each questionnaire, the response rate per clinic was checked and reported back to each dialysis clinic once a week during the data collection period.

Study population

In the participating clinics, care providers (i.e. nephrologists, nurses, psychologists, dieticians, and clinic managers) were considered eligible if they: 1) were actively involved in the clinical and/or administrative process of the dialysis clinic; and 2) had worked at the site for at least one full month for at least 8 hours/week. Care providers were excluded if they were unable or unwilling to provide informed consent. Patients were considered eligible to participate in the study if they: 1) were aged 18 years or older; and 2) had end-stage kidney disease treated with haemodialysis for 90 days or longer. Patients were excluded if they: 1) were unable or unwilling to provide informed consent; 2) were unable to complete an online questionnaire even with the help of carers; or 3) had a life expectancy of less than 12 months.

Sample size calculation

The estimated minimal sample size was based on the requirement of 10 subjects per item within each RMIC-MT questionnaire [20]. Given that the RMIC-MT provider questionnaire had 48 items and the patient questionnaire 24 items, the required sample sizes were 480 and 240, respectively. Thus, the study included a sample larger than that recommended for the statistical analysis.

Study variables RMIC-MT provider version

The provider version assessed how renal care providers perceived the clinic’s ability to deliver integrated care on a five-point Likert scale (i.e. never, rarely, sometimes, often, always) across 48 items: person-centeredness (e.g. needs assessment), community-centeredness (e.g. population screening), service coordination (e.g. personal care plan), professional coordination (e.g. multidisciplinary team), organisational coordination (e.g. inter-organisational partnerships), system coordination (e.g. policy and financing), technical competence (e.g. interoperable medical records), and cultural competence (e.g. collaboration culture) [21]. Care providers were also asked to rate the overall perceived ability to coordinate care internally and externally on a 10-point scale ranging from very poor (1) to excellent (10) [19]. In addition, the adaptive reserve of the clinic was assessed using the resource and culture subscales, on a five-point Likert scale, derived from the work of Helfrich et al. [22]. Finally, data on type of profession were collected.

Study variables RMIC-MT patient version

The patient version assessed how patients experienced the integration of care on a five-point Likert scale (i.e. never, rarely, sometimes, often, always) across 24 items: person-centeredness (e.g. needs assessment), service coordination (e.g. personal care plan), professional coordination (e.g. multidisciplinary team), and organisational coordination (e.g. inter-organisational partnerships) [21]. Patients were also asked which care providers they had visited outside the dialysis clinic and how they perceived their cooperation with the dialysis clinic, on a five-point Likert scale ranging from poor (1) to very good (5). In addition, patients were asked to rate the overall perceived coordination, quality and involvement of care on a 10-point scale ranging from very poor (1) to excellent (10) [19]. Finally, the following socio-demographic data were collected: age; gender; marital status; work status; and health status on a five-point Likert scale ranging from very poor (1) to very good (5) [23].

Statistical analysis

Data were entered, cleaned and checked before the analysis. Continuous variables were expressed as means and standard deviations. Frequencies and percentages were used for categorical variables. The distribution properties of responses to the RMIC-MT items were used to study psychometric sensitivity. Items with skewness (Sk) values > 3 and kurtosis (Ku) values > 7 were considered to have psychometric sensitivity issues [24]. Items with a floor or ceiling effect of > 75% of respondents were considered problematic and deleted [25].
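A minimal sketch of this item-level screening is shown below; the data, column names, and the use of sample (excess) kurtosis are illustrative assumptions, not the authors’ actual SPSS procedure.

```python
# Minimal sketch of the item screening described above (hypothetical data;
# thresholds as stated in the text: Sk > 3, Ku > 7, floor/ceiling > 75%).
import pandas as pd
from scipy.stats import skew, kurtosis

# Hypothetical 1-5 Likert responses, one column per questionnaire item.
items = pd.DataFrame({
    "person_centeredness_1": [5, 4, 5, 5, 3, 4, 5, 5],
    "clinical_coordination_1": [2, 3, 4, 3, 2, 5, 4, 3],
})

for col in items:
    x = items[col]
    sk = skew(x)
    ku = kurtosis(x)                 # excess kurtosis (an assumption here)
    floor = (x == 1).mean()          # share of respondents at the lowest category
    ceiling = (x == 5).mean()        # share of respondents at the highest category
    flagged = abs(sk) > 3 or abs(ku) > 7 or floor > 0.75 or ceiling > 0.75
    print(f"{col}: Sk={sk:.2f}, Ku={ku:.2f}, "
          f"floor={floor:.0%}, ceiling={ceiling:.0%}, flagged={flagged}")
```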

Exploratory factor analysis with the principal axis factoring extraction method and promax (oblique) rotation was used to assess the underlying structure of the RMIC-MT patient and provider questionnaires [26]. Bartlett’s test of sphericity and the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy were used to determine whether the requirements for a factor analysis were met [27]. The number of factors to retain was determined by considering the eigenvalues (> 1), the scree plot, and the interpretability of the factors. More importantly, the factors retained had to be theoretically grounded [26]. Each identified factor was named after the corresponding domain of the RMIC. Items that cross-loaded on more than one factor were placed with the factor to which they were most closely related conceptually. Items with poor factor loadings (< 0.6) were removed from the final questionnaires [26]. Additionally, a structural equation model with maximum likelihood estimation was used to evaluate the fit of the exploratory factor analysis model, using the standard fit indices: root-mean-square error of approximation (RMSEA) (≤ 0.06, 90% CI ≤ 0.06), standardized root-mean-square residual (SRMR) (≤ 0.08), comparative fit index (CFI) (≥ 0.95), Tucker-Lewis index (TLI) (≥ 0.95), and a chi-square/df ratio of less than 3 [26].
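As a rough illustration of this workflow (not the authors’ SPSS/AMOS syntax), the sketch below runs the suitability checks and an obliquely rotated factor analysis with the open-source factor_analyzer package; the input file, the use of its "principal" extraction method as a stand-in for principal axis factoring, and the four-factor choice are assumptions.

```python
# Minimal sketch of the EFA workflow described above (hypothetical input file).
import pandas as pd
from factor_analyzer import (FactorAnalyzer, calculate_kmo,
                             calculate_bartlett_sphericity)

responses = pd.read_csv("rmic_mt_patient_items.csv")  # one column per item

# Suitability checks: Bartlett's test of sphericity and the KMO measure.
chi2, p_value = calculate_bartlett_sphericity(responses)
kmo_per_item, kmo_total = calculate_kmo(responses)
print(f"Bartlett chi2 = {chi2:.1f}, p = {p_value:.4f}, KMO = {kmo_total:.2f}")

# Extraction with an oblique (promax) rotation; the number of factors would be
# guided by eigenvalues (> 1), the scree plot, and interpretability vs the RMIC.
fa = FactorAnalyzer(n_factors=4, method="principal", rotation="promax")
fa.fit(responses)

eigenvalues, _ = fa.get_eigenvalues()
loadings = pd.DataFrame(fa.loadings_, index=responses.columns)

# Items whose highest absolute loading is below 0.6 would be removal candidates.
weak_items = loadings[loadings.abs().max(axis=1) < 0.6].index.tolist()
print("First eigenvalues:", eigenvalues[:6].round(2))
print("Candidate items for removal:", weak_items)
```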

Internal consistency was assessed using item-total correlations and Cronbach’s alpha. Item-total correlations assess the overall correlation between items within a scale and should be ≥ 0.4. A Cronbach’s alpha of ≥ 0.70 was considered acceptable for a scale to be sufficiently reliable [28,29]. Pearson correlation coefficients (r) were calculated to assess whether each item was in the right subscale by correlating items with the subscale means. Items that correlated more highly with subscales other than the one to which they were assigned were eliminated [30,31].
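For completeness, here is a minimal sketch of these reliability checks (Cronbach’s alpha and corrected item-total correlations); the subscale data are randomly generated placeholders rather than study data.

```python
# Minimal sketch: Cronbach's alpha and corrected item-total correlations for a
# subscale (hypothetical data; acceptance criteria as stated in the text).
import numpy as np
import pandas as pd

def cronbach_alpha(scale: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = scale.shape[1]
    item_var_sum = scale.var(axis=0, ddof=1).sum()
    total_var = scale.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

def corrected_item_total(scale: pd.DataFrame) -> pd.Series:
    """Correlate each item with the sum of the remaining items in its subscale."""
    return pd.Series({col: scale[col].corr(scale.drop(columns=col).sum(axis=1))
                      for col in scale.columns})

rng = np.random.default_rng(0)
subscale = pd.DataFrame(rng.integers(1, 6, size=(200, 6)),   # 1-5 Likert items
                        columns=[f"item_{i}" for i in range(1, 7)])

print("Cronbach's alpha:", round(cronbach_alpha(subscale), 2))  # criterion: >= 0.70
print(corrected_item_total(subscale).round(2))                  # criterion: >= 0.4
```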

Construct validity was assessed by calculating Pearson’s correlations between the integrated care scale scores and two overall perceived coordination questions within the RMIC-MT patient and provider version. Moderately positive associations (≥ 0.4) between integrated care and these correlates would indicate good construct validity [32]. For patients, the following hypotheses were tested based on previous studies [33]: 1) patients who experience better coordinated care are more satisfied with (a) the quality of care and (b) treatment involvement; 2) the subscales of the instrument all aim to measure the coordinated care experience and are therefore positively and significantly correlated with each other; and 3) patients who report their health as good are more positive about the care coordination experience than patients who report their health as poor. For care providers, the following hypotheses were tested: 1) care providers who indicate a better care coordination ability are more satisfied with (a) the internal adaptive reserve and (b) the external care coordination ability; and 2) the subscales of the instrument all aim to measure the clinic’s coordinated care ability and are therefore positively and significantly correlated with each other. P-values < 0.05 were considered statistically significant. All statistical analyses were done using SPSS version 23.0 (IBM SPSS Statistics, 2015) and the AMOS statistical package version 23.
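A minimal sketch of these correlation checks follows; the file name and column names are hypothetical stand-ins for the subscale means and the global rating questions.

```python
# Minimal sketch of the construct-validity correlations described above
# (hypothetical file and column names; r >= 0.4 with p < 0.05 taken as support).
import pandas as pd
from scipy.stats import pearsonr

scores = pd.read_csv("rmic_mt_patient_scores.csv")  # one row per respondent

subscales = ["person_centeredness", "clinical_coordination",
             "professional_coordination", "organisational_coordination"]
criteria = ["overall_quality", "treatment_involvement", "reported_health"]

for sub in subscales:
    for crit in criteria:
        r, p = pearsonr(scores[sub], scores[crit])
        support = "supported" if r >= 0.4 and p < 0.05 else "not supported"
        print(f"{sub} vs {crit}: r = {r:.2f}, p = {p:.3f} ({support})")
```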

Ethics

Ethical approval for this study was waived by the Independent Review Board Nijmegen (IRBN) [34] because the study was considered noninterventional. The committee concluded that this study was conducted in accordance with the ethical principles that have their origin in the Declaration of Helsinki and that are consistent with good clinical practice (GCP). Written informed consent was obtained from each participant before collecting data.

Results

Instrument development

Based on two systematic reviews [6,16] and four additional publications [22,35–37], we identified 234 integrated care instruments, of which 58 were considered potentially eligible. Of these 58 instruments, the items of six instruments [25,35,37–40] were used to improve the professional, organisational, system and technical competence scales of the existing RMIC-MT provider version (73 items), and the items of four instruments [31,41–43] were selected to construct the RMIC-MT patient version (24 items). After one revision round, the RMIC-MT provider version consisted of 50 items grouped into eight domains, and the RMIC-MT patient version consisted of 28 items grouped into four domains.

Instrument evaluation

Face and content validity

Six practitioners, five managers, and one CKD patient (response rate: 71%) reviewed the RMIC-MT provider version, and four practitioners, three managers, and two CKD patients (response rate: 53%) reviewed the RMIC-MT patient version. The face validity scores of both the RMIC-MT patient and provider version are tabulated in Table 1. The most overlapping or redundant questions were found in the professional coordination scale of both the patient and provider version. In addition, the participants considered the cultural competence scale of the provider version redundant. Based on the qualitative comments made by the participants, items of the community centeredness, system coordination, and technical competence scales of the provider version were revised. In addition, questions of the clinical coordination and professional coordination scales of the patient version were changed based on the qualitative comments made by the participants. Twenty-four of the 25 items of the RMIC-MT patient version had excellent content validity (I-CVI ≥ 0.78), and one item had fair content validity (I-CVI < 0.78). The average scale content validity (S-CVI Ave) for the RMIC-MT provider version ranged from 0.68 for the organisational integration scale to 0.97 for the person-focused care scale (Table 1). The S-CVI Ave for the entire RMIC-MT provider version was 0.85. The average scale content validity (S-CVI Ave) for the RMIC-MT patient version ranged from 0.75 for the organisational coordination scale to 0.96 for the person-centeredness scale (Table 1). The S-CVI Ave for the entire patient questionnaire was 0.87.

Tab. 1.

Face and content validity RMIC-MT patient and provider version.


Clarity and feasibility

Fifty-three CKD patients participated in the pilot study using the RMIC-MT patient version.

The average age of the patients in the sample was 67 years (SD 15), with 55% male and 45% female. The majority of the patients were retired (72%), and their care was coordinated by their nephrologist (89%) (S2 Table). The mean completion time for the RMIC-MT patient version was 9 minutes (SD: 2.7), with a range of 5 to 20 minutes. Overall, 13% (n = 7) of patients had help completing the questionnaire (S3 Table). In addition, only one patient considered one question difficult, and none of the patients considered the questions upsetting. Based on the results of the feasibility study, no changes to the instrument were deemed necessary.

Instrument validation

Data collection

A total of 17,512 CKD patients (56.9% response rate) and 5,849 care providers (69.5% response rate) completed the questionnaires. There were no missing values in the data because all items were set as required (see details in the method section). The mean age of the patients was 61.9 (SD: 15.5, range 4–118) years. A small majority of the patients were male (51.9%, n = 9084) and retired (52.3%, n = 9164). The majority of care providers were nurses (54.4%, n = 3179). The demographic characteristics of the participants are listed in Table 2.

Tab. 2.

Characteristics of the study participants.


Item score distribution

No items of the RMIC-MT patient or provider version presented severe floor or ceiling effects or problematic Sk and Ku values, indicating adequate psychometric sensitivity of the items. S4 Table presents the summary measures of the items of the RMIC-MT patient and provider version.

Factor analysis

The KMO test of sampling adequacy (0.97) and Bartlett’s test of sphericity (p < 0.0001) indicated that a factor analysis of the RMIC-MT patient version was appropriate. The analysis yielded two factors with eigenvalues > 1, which accounted for 51.2% of the variance. Two additional factors with eigenvalues < 1, accounting for 5.8% of the variance, were also included because they were interpretable based on the RMIC. Hence, a four-factor solution was chosen. Factor 1 was named ‘clinical coordination’ (6 items, 46.6% of variance), factor 2 ‘professional coordination’ (6 items, 4.6% of variance), factor 3 ‘organisational coordination’ (6 items, 3.5% of variance), and factor 4 ‘person-centeredness’ (2 items, 2.3% of variance). In this solution, 8 items were omitted (i.e. items 8, 7, 11, 12, 13, 15, 22, and 19) because their factor loadings were below 0.6, see Table 3. Regarding model fit (4 factors, 16 items), the following tests of significance and goodness-of-fit measures were obtained: χ2 (98) = 5587.7; p < 0.0001; χ2/df = 57 | CFI = 0.97 | TLI = 0.97 | RMSEA (HI90) = 0.057 (0.058) | SRMR = 0.033. Thus, the 4-factor exploratory factor model showed an acceptable fit, except for the chi-square/df ratio.

Tab. 3.

Factor analysis RMIC-MT patient version (n = 17,512).


In addition, the KMO test of sampling adequacy (0.95) and Bartlett’s test of sphericity (p < 0.0001) indicated that the criteria for a factor analysis were also met for the RMIC-MT provider version. A nine-factor solution was obtained with a total of 48 items, which explained 57.6% of the variance. The analysis yielded six factors with eigenvalues > 1, which accounted for 53.2% of the variance. Three additional factors with eigenvalues < 1, accounting for 4.4% of the variance, were also included because they were interpretable based on the RMIC.

Hence, a nine-factor solution was obtained. Factor 1 was named ‘cultural competence’ (8 items, 29.7% of variance), factor 2 ‘person-centeredness’ (5 items, 6.7% of variance), factor 3 ‘technical competence’ (5 items, 6.3% of variance), factor 4 ‘professional coordination’ (5 items, 5.2% of variance), factor 5 ‘clinical coordination’ (7 items, 3.0% of variance), factor 6 ‘Triple Aim’ (5 items, 2.3% of variance), factor 7 ‘organisational coordination’ (6 items, 1.7% of variance), factor 8 ‘system coordination’ (3 items, 1.4% of variance) and factor 9 ‘community centeredness’ (4 items, 1.3% of variance). In this solution, 12 items were omitted (i.e. items 47, 46, 48, 42, 36, 13, 14, 16, 15, 24, 21, and 2) because their factor loadings were below 0.6, see Table 4. Regarding model fit (9 factors, 36 items), the following tests of significance and goodness-of-fit measures were obtained: χ2 (558) = 6100.5; p < 0.0001; χ2/df = 10.9 | CFI = 0.96 | TLI = 0.96 | RMSEA (HI90) = 0.041 (0.041) | SRMR = 0.026. In short, the 9-factor model showed an acceptable fit, except for the chi-square/df ratio.

Tab. 4.

Factor analysis RMIC-MT provider version (n = 6,052).


Internal consistency

Internal consistency analysis showed that reliability assumptions were adequately met for all four scales of the RMIC-MT patient version. Item-total correlations exceeded 0.4 for all items, see S5 Table. Cronbach’s alpha ranged from 0.84 for the organisational coordination scale to 0.93 for the clinical coordination scale, see Table 5. Inspection of the correlation matrices revealed that all individual items correlated more highly with their respective subscale than with the competing scales. Thus, the final RMIC-MT patient version is a reliable scale (alpha 0.94) comprising 16 patient experience items.

Tab. 5.

Descriptive statistics and internal consistency of the RMIC-MT patient and provider version.


The internal consistency analysis also showed that the nine domains of the RMIC-MT provider version could be reliably measured. Item-total correlations exceeded 0.4 for all items, see S6 Table. Cronbach’s alpha for the RMIC-MT provider version ranged from 0.84 for the technical competence scale to 0.90 for the system coordination, cultural competence, person-centeredness, and Triple Aim scales, see Table 5. Item-scale correlations showed that all individual items correlated more highly with their respective subscale than with the competing scales. Hence, the RMIC-MT provider version is also a reliable instrument (alpha 0.94) of 36 items to measure integrated care.

Construct validity

All patient-related hypotheses could be accepted, which confirms the construct validity of the RMIC-MT patient version. Patients who experienced better care coordination were more satisfied with the quality of care (r = 0.51, P < 0.01) and treatment involvement (r = 0.40, P < 0.01), see Table 6. All subscales of the RMIC-MT patient version were strongly and significantly (P < 0.01) correlated with each other (r = 0.46–0.89). Finally, patients in poorer health experienced significantly (P < 0.01) poorer care coordination (r = 0.22).

Tab. 6.

Correlation between scale scores RMIC-MT patient version (n = 13,191).


In addition, care providers who indicated a better care coordination ability were more satisfied with the adaptive reserve (r = 0.71, P < 0.01) and external care coordination capacity (r = 0.59, P < 0.01) of their clinic, see Table 7. All subscales of the RMIC-MT provider version were positively and significantly (P < 0.01) correlated with each other (r = 0.09–0.66). The strongest correlations were observed between ‘community-centeredness’ and ‘person-centeredness’ (r = 0.66, P < 0.01), ‘system coordination’ and ‘organisational coordination’ (r = 0.63, P < 0.01), and ‘Triple Aim’ and ‘clinical coordination’ (r = 0.62, P < 0.01). Notably, the perceived professional coordination ability showed only small correlations with the other scale scores (r = 0.09–0.24). Again, all provider-related hypotheses could be accepted, which confirms the construct validity of the RMIC-MT provider version.

Tab. 7.

Correlation between scale scores RMIC-MT provider version (n = 5,849).


Discussion

Principal findings

The RMIC-MT patient and provider version showed excellent face and content validity in the expert panel assessment. The clarity and feasibility of the RMIC-MT patient version were assessed by 53 CKD patients in a pilot study; patients indicated that the survey instrument was easy to understand and complete. The factor structure and psychometric properties of the RMIC-MT patient and provider survey instruments were tested in a cohort of 30,788 CKD patients and 8,421 renal care providers. Statistical analysis indicated that the internal consistency, reliability, and construct validity of both the RMIC-MT patient version (16 items, 4 subscales) and provider version (36 items, 9 subscales) were good. The proposed models also passed the majority of goodness-of-fit tests using structural equation modelling. This suggests that the RMIC-MT is a valuable psychometric tool for evaluating care coordination as perceived by patients and care providers. Given the small number of items, both instruments are practical for assessing care coordination in routine practice.

Comparison with other studies

The factor analysis of the RMIC-MT patient version leads us to conclude that CKD patients do differentiate between distinct, complementary domains of integrated care (i.e. person-centred care, clinical coordination, professional coordination, and organisational coordination) as described by the RMIC [10]. Yet, the organisational coordination, system coordination and community-centeredness domains did not meet the eigenvalue criterion of > 1 within the RMIC-MT provider version. Possibly, this multi-dimensionality was not pronounced because organisational coordination, system coordination and community-centeredness activities are not closely related to the clinical encounter in the dialysis clinics. In addition, previous studies have also shown that care providers find it difficult to differentiate between the organisational and system domains of integrated care [44].

Most of the variance of the RMIC-MT patient version was explained by the clinical coordination domain, which has also been accentuated by other measurement tools [6]. The organisational coordination and person-centeredness domains of the RMIC-MT patient version did not meet the eigenvalue criterion of > 1. This might be due to (cultural) differences in the organisational and person-centeredness experiences across countries and dialysis clinics. The majority of the items hypothesised to belong to the person-centeredness domain were absorbed by the clinical coordination domain or did not meet the inclusion criteria. It is noteworthy that the clinical coordination domain taps into aspects related to knowing and respecting the patients’ values, which is a critical aspect of tailoring the care coordination process [45]. Studies have shown that physician recognition and advance knowledge of patients’ needs can have a positive effect on patient outcomes [46].

The RMIC-MT provider version yielded nine domains reflecting all the hypothesised domains of the RMIC (i.e. person-centeredness, community centeredness, clinical coordination, professional coordination, organisational coordination, system coordination, technical competence, cultural competence, and Triple Aim outcomes). The majority of care coordination tools that have been developed are restricted and only measure aspects of the clinical and professional care coordination process [6,16]. Most of the variance of the RMIC-MT provider version was explained by cultural competence. This finding indicates the importance of normative trust mechanisms in the care coordination process, which has not been accentuated in previous care coordination tools [6,16]. The person-centeredness and clinical coordination domains highlight the fact that delivering integrated care is a participatory process of co-creation between care providers and patients [47].

Compared to the RMIC-MT patient version, the clinical coordination domain of the RMIC-MT provider version explained a relatively small proportion of the variance. This might suggest that care providers consider the enabling ‘backstage process’ of integrated care delivery (e.g. cultural and technical competence) more important than the domains more closely related to the clinical encounter (e.g. clinical and professional coordination) [44]. Finally, the community-centeredness domain highlights the importance of population-orientation as a guiding principle of integrated care, which has not been considered as an aspect of integrated care in previous questionnaires [6,16].

Strengths and limitations

A strength of the RMIC-MT patient and provider version is the thorough development and validation process. Face and content validity are supported by the use of a conceptual model, results from previous validation studies, literature reviews, and a multidisciplinary expert panel in the development of the RMIC-MT questionnaires. The study results showed that both instruments had excellent face and content validity (S-CVI Ave > 0.78). Usability of the RMIC-MT patient version was carefully pre-tested following the method described by Aaronson et al. (2011) [18]. Factor analysis and good levels of internal consistency (Cronbach’s alpha > 0.70) of the RMIC-MT patient and provider version subscales provide evidence of reliable and valid questionnaires. The RMIC-MT patient version in particular has good evidence for construct validity, with the patient-related hypotheses largely being met. The significant relationship between patient care coordination scores and quality, treatment involvement, and health follows previous findings [33,48–54]. To further establish construct validity in future research, it would be interesting to test whether patient care coordination scores correlate with provider care coordination scores at the dialysis clinic level. With the exception of the professional coordination and technical competence subscales, the RMIC-MT provider version also had good evidence for construct validity. In the comparison of scale scores, those measuring the organisational aspects of integration (i.e. organisational and Triple Aim) had the highest levels of correlation. We found support for the hypothesis that a clinic’s adaptive reserve is a prerequisite for a better care coordination process, which is an important finding [22]. The sample size and response rates were high, which strengthens our results for the CKD population and the general applicability of the RMIC-MTs. Since the RMIC-MTs were developed using a non-disease-specific approach, the instruments can probably be used in other settings and with other patient groups, although their applicability should be assessed before use in those settings.

However, there are a number of study limitations. First, the limited number of CKD patients participating in the expert panel is a cause for concern regarding the content validity of the RMIC-MT patient version. Future studies should explore the content validity of the RMIC-MT patient version in more detail among a larger group of patients using the I-CVI. Second, we did not use a back-translation process to ensure linguistic validation. Although all translations were independently reviewed by bilingual experts, future research is needed on the conceptual and cultural equivalence of the non-English versions of the RMIC-MT patient and provider versions. Third, while the validity of the RMIC-MTs was addressed in the current study, more research is needed to assess the test-retest reliability, responsiveness to change, and construct validity against external criteria (e.g. satisfaction, quality of care, access to care) [55]. Testing the reliability (test-retest), responsiveness and construct validity of the RMIC-MTs is already planned using a longitudinal evaluation design. Future studies should also explore how the RMIC-MT scale scores relate to relevant patient-reported outcomes, thus establishing convergent and predictive validity. In addition, future studies should assess the discriminant validity of the RMIC-MTs, as it is likely that the care coordination process differs between clinics and care systems [56]. A fourth limitation is that the sample was a convenience sample, which may not be representative of the general CKD population and renal care providers. Although various healthcare professionals were included, the majority were nurses and nephrologists. In addition, only stage 4–5 CKD patients were included. Furthermore, the entire sample was obtained from a large renal care network within 19 countries. Hence, application of the findings to all CKD patients and renal care providers within these countries and beyond is limited. Fifth, the present study found that care providers were more critical about the clinics’ integrated care ability than patients were about their overall integrated care experience. This raises the question of whether the patient scores accurately reflect a very positive integrated care experience or a measurement variability limitation. To address this concern, we used an agree-to-disagree Likert scale, which is considered to generate the greatest variability in patient-based measures [57]. In addition, several scholars have shown that patients with a threatening chronic disease like CKD are reluctant to criticise their physician in terms of delivering fragmented care [58–60]. This theory requires further research regarding the discriminant validity of the RMIC-MT patient version between countries and dialysis clinics. Finally, the scales of the RMIC-MTs are meant to standardize the measurement of integrated care across different settings and patient groups. Accordingly, it is worth cross-validating the results of the present study in different patient groups and settings.

Implications for practice

The RMIC-MTs are valuable instruments to assess the care coordination process and can accordingly be adopted for improving the integrated delivery of renal care. Both instruments can be easily administered to care providers and patients, taking approximately 10 to 20 minutes to complete. The RMIC-MTs can be used as a pre/post intervention measure by policymakers and commissioners to assess the impact of an integrated care programme, and by health administrators as an ongoing performance assessment tool to focus on key aspects of integrated service delivery across teams and organisations. For example, tailored information on dialysis clinic performance gives insight into a clinic’s care coordination strengths and weaknesses through the eyes of its patients and care providers. Evidence has shown that such internal feedback appears to be an incentive for quality improvement [61,62]. In addition, the RMIC-MTs can be used for benchmarking purposes, by distinguishing ‘strong’ from ‘weak’ performing dialysis clinics on care coordination experience and ability. Hence, we see several potential applications of the RMIC-MTs in research, performance assessment, continuing education, and evaluation.

Conclusion

In conclusion, this study provides evidence for the factor structure and psychometric properties of the RMIC patient and provider questionnaires as generic tools to measure integrated renal care. Both instruments serve as useful tools for assessing integrated care through the eyes of CKD patients and renal care providers. The instruments are recommended for future applications that test test-retest reliability, convergent and predictive validity, and responsiveness.

Supporting information

S1 Table [docx]
Original RMIC-MT for care providers (44 items).

S2 Table [docx]
Demographic characteristics of the CKD patients participating in pilot study (n = 53).

S3 Table [docx]
Clarity and feasibility of the RMIC-MT patient version (n = 53).

S4 Table [docx]
Summary measures of the items of the RMIC-MT patient and provider version.

S5 Table [docx]
Descriptive statistics and internal consistency RMIC-MT patient version.

S6 Table [docx]
Descriptive statistics and internal consistency RMIC-MT provider version.


References

1. Hill NR, Fatoba ST, Oke JL, Hirst JA, O'Callaghan CA, Lasserson DS, et al. Global Prevalence of Chronic Kidney Disease—A Systematic Review and Meta-Analysis. PLoS One. 2016;11: e0158765. doi: 10.1371/journal.pone.0158765 27383068

2. Stevens LA, Li S, Wang C, Huang C, Becker BN, Bomback AS, et al. Prevalence of CKD and comorbid illness in elderly patients in the United States: results from the Kidney Early Evaluation Program (KEEP). Am J Kidney Dis. 2010;55: S23–33. doi: 10.1053/j.ajkd.2009.09.035 20172445

3. Valentijn PP, Biermann C, Bruijnzeels MA. Value-based integrated (renal) care: setting a development agenda for research and implementation strategies. BMC Health Serv Res. 2016;16: 330-016–1586-0.

4. Kang H, Nembhard HB, Curry W, Ghahramani N, Hwang W. A systems thinking approach to prospective planning of interventions for chronic kidney disease care. Health Systems. 2016.

5. Fishbane S, Hazzan AD, Halinski C, Mathew AT. Challenges and opportunities in late-stage chronic kidney disease. Clin Kidney J. 2015;8: 54–60. doi: 10.1093/ckj/sfu128 25713711

6. Bautista MA, Nurjono M, Lim YW, Dessers E, Vrijhoef HJ. Instruments Measuring Integrated Care: A Systematic Review of Measurement Properties. Milbank Q. 2016;94: 862–917. doi: 10.1111/1468-0009.12233 27995711

7. Uijen AA, Schers HJ, Schellevis FG, van den Bosch WJ. How unique is continuity of care? A review of continuity and related concepts. Fam Pract. 2012;29: 264–271. doi: 10.1093/fampra/cmr104 22045931

8. Kodner DL. All together now: a conceptual exploration of integrated care. Healthc Q. 2009;13 (Special Issue): 6–15. 20057243

9. World Health Organization (WHO). WHO global strategy on integrated people-centred health services 2016–2026. Placing people and communities at the centre of health services. 2015.

10. Valentijn PP, Schepman SM, Opheij W, Bruijnzeels MA. Understanding integrated care: a comprehensive conceptual framework based on the integrative functions of primary care. Int J Integr Care. 2013;13: e010. doi: 10.5334/ijic.886 23687482

11. Boesveld I, Valentijn P, Hitzert M, Hermus M, Franx A, de Vries R, et al. An Approach to measuring Integrated Care within a Maternity Care System: Experiences from the Maternity Care Network Study and the Dutch Birth Centre Study. International journal of integrated care. 2017;17.

12. Angus L, Valentijn PP. From micro to macro: assessing implementation of integrated care in Australia. Aust J Prim Health. 2017.

13. Nurjono M, Bautista MAC, Dessers E, Lim YW, Vrijhoef HJM. Measurement of Integrated Care on the level of Regional Health System in Singapore. Int J Integr Care. 2014;14.

14. Valentijn P, Angus L, Boesveld I, Nurjono M, Ruwaard D, Vrijhoef H. Validating the Rainbow Model of Integrated Care Measurement Tool: results from three pilot studies in the Netherlands, Singapore and Australia. International Journal of Integrated Care. 2017;17. doi: 10.5334/ijic.3104

15. US Food and Drug Administration. Guidance for Industry. Patient-Reported Outcome Measures: Use in Medical Product Development to Support Labeling Claims. 2009.

16. Uijen AA, Heinst CW, Schellevis FG, van den Bosch WJ, van de Laar FA, Terwee CB, et al. Measurement properties of questionnaires measuring continuity of care: a systematic review. PLoS One. 2012;7: e42256. doi: 10.1371/journal.pone.0042256 22860100

17. Polit DF, Beck CT, Owen SV. Is the CVI an acceptable indicator of content validity? Appraisal and recommendations. Res Nurs Health. 2007;30: 459–467. doi: 10.1002/nur.20199 17654487

18. Johnson C, Aaronson N, Blazeby J, Bottomley A, Fayers P, Koller M, et al. Guidelines for developing questionnaire modules. 2011: 1–46.

19. Valentijn PP, Vrijhoef HJ, Ruwaard D, de Bont A, Arends RY, Bruijnzeels MA. Exploring the success of an integrated primary care partnership: a longitudinal study of collaboration processes. BMC Health Services Research. 2015;15: 32. doi: 10.1186/s12913-014-0634-x 25609186

20. Costello AB, Osborne JW. Best practices in exploratory factor analysis: Four recommendations for getting the most from your analysis. Practical assessment, research & evaluation. 2005;10: 1–9.

21. Essenburgh Research & Consultancy. Rainbow Model of Integrated Care Measurement Tools (RMIC-MT's) for Patient and Care providers. https://www.essenburgh.com/the-rainbow-model-measurements-tools-for-integrated-care. 2019.

22. Helfrich CD, Li YF, Sharp ND, Sales AE. Organizational readiness to change assessment (ORCA): development of an instrument based on the Promoting Action on Research in Health Services (PARIHS) framework. Implement Sci. 2009;4: 38-5908-4-38. doi: 10.1186/1748-5908-4-38 19594942

23. Stewart AL. Measuring functioning and well-being: the medical outcomes study approach: duke university Press; 1992.

24. Kline RB. Principles and practice of structural equation modeling: Guilford publications; 2015.

25. Zwart DL, Langelaan M, van de Vooren RC, Kuyvenhoven MM, Kalkman CJ, Verheij TJ, et al. Patient safety culture measurement in general practice. Clinimetric properties of 'SCOPE'. BMC Fam Pract. 2011;12: 117-2296-12-117.

26. Matsunaga M. How to Factor-Analyze Your Data Right: Do's, Don'ts, and How-To's. International journal of psychological research. 2010;3: 97–110.

27. Cerny BA, Kaiser HF. A Study Of A Measure Of Sampling Adequacy For Factor-Analytic Correlation Matrices. Multivariate Behav Res. 1977;12: 43–47. doi: 10.1207/s15327906mbr1201_3 26804143

28. Pallant J. SPSS survival manual: McGraw-Hill Education (UK); 2013.

29. Hair JF, Babin Barry J, Anderson RE, Tatham P L. Multivariate data analysis. Upper Saddle River, NJ: Pearson Prentice Hall; 2005.

30. Streiner DL, Norman GR, Cairney J. Health measurement scales: a practical guide to their development and use: Oxford University Press, USA; 2015.

31. van Empel IW, Aarts JW, Cohlen BJ, Huppelschoten DA, Laven JS, Nelen WL, et al. Measuring patient-centredness, the neglected outcome in fertility care: a random multicentre validation study. Hum Reprod. 2010;25: 2516–2526. doi: 10.1093/humrep/deq219 20719811

32. Field A. Discovering statistics using SPSS. Third Edition ed. London: Sage; 2009.

33. Valentijn PP, Pereira FA, Ruospo M, Palmer SC, Hegbrant J, Sterner CW, et al. Person-Centered Integrated Care for Chronic Kidney Disease: A Systematic Review and Meta-Analysis of Randomized Controlled Trials. Clin J Am Soc Nephrol. 2018.

34. Independent Review Board Nijmegen (IRBN). 2015.

35. Orchard CA, King GA, Khalili H, Bezzina MB. Assessment of Interprofessional Team Collaboration Scale (AITCS): development and testing of the instrument. J Contin Educ Health Prof. 2012;32: 58–67. doi: 10.1002/chp.21123 22447712

36. Schroder C, Medves J, Paterson M, Byrnes V, Chapman C, O'Riordan A, et al. Development and pilot testing of the collaborative practice assessment tool. J Interprof Care. 2011;25: 189–195. doi: 10.3109/13561820.2010.532620 21182434

37. Norris J, Carpenter JG, Eaton J, Guo JW, Lassche M, Pett MA, et al. The Development and Validation of the Interprofessional Attitudes Scale: Assessing the Interprofessional Attitudes of Students in the Health Professions. Acad Med. 2015;90: 1394–1400. doi: 10.1097/ACM.0000000000000764 25993280

38. Vanhaecht K, De Witte K, Depreitere R, Van Zelm R, De Bleser L, Proost K, et al. Development and validation of a care process self-evaluation tool. Health Serv Manage Res. 2007;20: 189–202. doi: 10.1258/095148407781395964 17683658

39. Starfield B, Cassady C, Nanda J, Forrest CB, Berk R. Consumer experiences and provider perceptions of the quality of primary care: implications for managed care. J Fam Pract. 1998;46: 216–226. 9519019

40. Dobrow MJ, Paszat L, Golden B, Brown AD, Holowaty E, Orchard MC, et al. Measuring Integration of Cancer Services to Support Performance Improvement: The CSI Survey. Healthc Policy. 2009;5: 35–53. 20676250

41. Uijen AA, Schellevis FG, van den Bosch WJ, Mokkink HG, van Weel C, Schers HJ. Nijmegen Continuity Questionnaire: development and testing of a questionnaire that measures continuity of care. J Clin Epidemiol. 2011;64: 1391–1399. doi: 10.1016/j.jclinepi.2011.03.006 21689904

42. Elwyn G, Barr PJ, Grande SW, Thompson R, Walsh T, Ozanne EM. Developing CollaboRATE: a fast and frugal patient-reported measure of shared decision making in clinical encounters. Patient Educ Couns. 2013;93: 102–107. doi: 10.1016/j.pec.2013.05.009 23768763

43. Campbell HS, Hall AE, Sanson-Fisher RW, Barker D, Turner D, Taylor-Brown J. Development and validation of the Short-Form Survivor Unmet Needs Survey (SF-SUNS). Support Care Cancer. 2014;22: 1071–1079. doi: 10.1007/s00520-013-2061-7 24292016

44. Valentijn PP, Vrijhoef HJ, Ruwaard D, Boesveld I, Arends RY, Bruijnzeels MA. Towards an international taxonomy of integrated primary care: a Delphi consensus approach. BMC Fam Pract. 2015;16: 64-015-0278-x.

45. Singer SJ, Burgers J, Friedberg M, Rosenthal MB, Leape L, Schneider E. Defining and measuring integrated patient care: promoting the next frontier in health care delivery. Med Care Res Rev. 2011;68: 112–127. doi: 10.1177/1077558710371485 20555018

46. Mead N, Bower P. Patient-centred consultations and outcomes in primary care: a review of the literature. Patient Educ Couns. 2002;48: 51–61. 12220750

47. Valentijn PP. Rainbow of Chaos: A study into the Theory and Practice of Integrated Primary Care: Pim P. Valentijn, [S.l.: s.n.], 2015 (Print Service Ede), pp. 195, Doctoral Thesis Tilburg University, The Netherlands, ISBN: 978-94-91602-40-5. Int J Integr Care. 2016;16: 3.

48. Pimouguet C, Le Goff M, Thiebaut R, Dartigues JF, Helmer C. Effectiveness of disease-management programs for improving diabetes care: a meta-analysis. CMAJ. 2011;183: E115–27. doi: 10.1503/cmaj.091786 21149524

49. Renders CM, Valk GD, Griffin S, Wagner EH, Eijk JT, Assendelft WJ. Interventions to improve the management of diabetes mellitus in primary care, outpatient and community settings. Cochrane Database Syst Rev. 2001;(1): CD001481. doi: 10.1002/14651858.CD001481 11279717

50. Gonseth J, Guallar-Castillon P, Banegas JR, Rodriguez-Artalejo F. The effectiveness of disease management programmes in reducing hospital re-admission in older patients with heart failure: a systematic review and meta-analysis of published reports. Eur Heart J. 2004;25: 1570–1595. doi: 10.1016/j.ehj.2004.04.022 15351157

51. Roccaforte R, Demers C, Baldassarre F, Teo KK, Yusuf S. Effectiveness of comprehensive disease management programmes in improving clinical outcomes in heart failure patients. A meta-analysis. Eur J Heart Fail. 2005;7: 1133–1144. doi: 10.1016/j.ejheart.2005.08.005 16198629

52. Badamgarav E, Weingarten SR, Henning JM, Knight K, Hasselblad V, Gano A Jr, et al. Effectiveness of disease management programs in depression: a systematic review. Am J Psychiatry. 2003;160: 2080–2090. doi: 10.1176/appi.ajp.160.12.2080 14638573

53. Neumeyer-Gromen A, Lampert T, Stark K, Kallischnigg G. Disease management programs for depression: a systematic review and meta-analysis of randomized controlled trials. Med Care. 2004;42: 1211–1221. doi: 10.1097/00005650-200412000-00008 15550801

54. Kruis AL, Smidt N, Assendelft WJ, Gussekloo J, Boland MR, Rutten-van Molken M, et al. Integrated disease management interventions for patients with chronic obstructive pulmonary disease. Cochrane Database Syst Rev. 2013;(10):CD009437. CD009437. doi: 10.1002/14651858.CD009437.pub2 24108523

55. Reeve BB, Wyrwich KW, Wu AW, Velikova G, Terwee CB, Snyder CF, et al. ISOQOL recommends minimum standards for patient-reported outcome measures used in patient-centered outcomes and comparative effectiveness research. Qual Life Res. 2013;22: 1889–1905. doi: 10.1007/s11136-012-0344-y 23288613

56. GBD 2016 Healthcare Access and Quality Collaborators. Measuring performance on the Healthcare Access and Quality Index for 195 countries and territories and selected subnational locations: a systematic analysis from the Global Burden of Disease Study 2016. Lancet. 2018;391: 2236–2271. doi: 10.1016/S0140-6736(18)30994-2 29893224

57. Wensing M, Grol R, Smits A. Quality judgements by patients on general practice care: a literature analysis. Soc Sci Med. 1994;38: 45–53. doi: 10.1016/0277-9536(94)90298-4 8146714

58. Coyle J. Understanding dissatisfied users: developing a framework for comprehending criticisms of health care work. J Adv Nurs. 1999;30: 723–731. doi: 10.1046/j.1365-2648.1999.01137.x 10499230

59. Edwards C, Staniszewska S, Crichton N. Investigation of the ways in which patients’ reports of their satisfaction with healthcare are constructed. Sociol Health Illn. 2004;26: 159–183. doi: 10.1111/j.1467-9566.2004.00385.x 15027983

60. Staniszewska SH, Henderson L. Patients’ evaluations of the quality of care: influencing factors and the importance of engagement. J Adv Nurs. 2005;49: 530–537. doi: 10.1111/j.1365-2648.2004.03326.x 15713185

61. Berwick DM, James B, Coye MJ. Connections between quality measurement and improvement. Med Care. 2003;41: I30–8. doi: 10.1097/00005650-200301001-00004 12544814

62. Coulter A, Ellins J. Effectiveness of strategies for informing, educating, and involving patients. BMJ. 2007;335: 24–27. doi: 10.1136/bmj.39246.581169.80 17615222

