=Paper= {{Paper |id=Vol-1683/hda16_li_l |storemode=property |title=Assessing the variation of pathology test utilisation volume by Diagnosis-Related Groups |pdfUrl=https://ceur-ws.org/Vol-1683/hda16_li_l.pdf |volume=Vol-1683 |authors=Ling Li,Elia Vecellio,Juan Xiong,Andrew Georgiou,Alex Eigenstetter,Trevor Cobain,Roger Wilson,Robert Lindeman,Johanna I Westbrook }} ==Assessing the variation of pathology test utilisation volume by Diagnosis-Related Groups== https://ceur-ws.org/Vol-1683/hda16_li_l.pdf
   Assessing the variation of pathology test
   utilisation volume by Diagnosis-Related
                    Groups
    Ling Li 1, Elia Vecellio 1,2, Juan Xiong 1, Andrew Georgiou 1, Alex Eigenstetter 2,3,
Trevor Cobain 2, Roger Wilson 3,4, Robert Lindeman 2, Johanna I Westbrook 1
    1
      Centre for Health Systems and Safety Research, Australian Institute of Health
Innovation, Macquarie University, Sydney, NSW, Australia
    2
      South Eastern Area Laboratory Services, NSW Health Pathology, NSW, Australia
    3
      Executive Unit, NSW Health Pathology, NSW, Australia
        4
          School of Medical Sciences, UNSW Medicine, Sydney, NSW, Australia


          Abstract. There are currently few meaningful data and analyses about variation in
          the use of pathology tests controlling for patient casemix. Diagnosis-Related Group
          (DRG) codes are allocated to all patients admitted into a hospital as inpatients. The
          aim of this study was to use these codes to examine pathology test volumes and
          variation between hospitals based on linked administrative datasets collected from
          the pathology service and hospitals. A total of 11,370,000 test orders over six years
          for inpatients at four hospitals with a single pathology provider were analysed in
          this retrospective cohort study. The crude and adjusted rates of tests ordered per
          patient day were estimated using Poisson models. The adjusted rate across all the
          hospitals revealed a general increase of pathology requests across time from 2008
          to 2012. Quality improvements in pathology requesting are dependent on the
          availability of quality data and meaningful analyses to identify variation between
          locations across time accounting for differences in patient characteristics. Using
          DRGs as a way to control for casemix can facilitate the meaningful examination of
          test utilisation and variation between hospitals and across time to improve quality
          use of pathology.

          Keywords. Diagnosis-Related Groups, test utilisation, Poisson modelling, casemix,
          pathology services, laboratory tests, data linkage



Introduction

Over the last three decades there has been considerable growth in the number of requests
for pathology and medical imaging services. The number of Medicare-funded laboratory
tests in Australia increased by 54% between 2000-2001 and 2007-2008.[1]
Improvements in the quality of pathology requesting rely upon reliable data regarding
current practices and the identification of areas requiring greater attention and support.
There are currently few meaningful data about variation in the use of pathology
investigations controlling for patient diagnostic groups. Diagnosis-Related Groups
(DRGs) provide a basis to compare profiles of pathology requesting for similar patient
groups across hospitals, specialties and by clinician.
     DRGs were developed at Yale University in the USA in the 1970s with the aim of
defining and measuring hospital performance.[2] DRGs developed into a system which
sought to pay hospitals based on the premise that money should follow the patient – a
model that is often referred to as Activity-Based Funding (ABF).[3] DRGs have also
been used as a means of monitoring care, improving transparency and enhancing
efficiency.[4] They should enable hospitals to receive funding appropriate to the number
and mix of their patients. DRGs achieve this by reducing the large variety of individual
hospital patient characteristics into manageable and meaningful groups that can then be
used to make comparisons across different settings, measure efficiency and effectiveness,
as well as monitor variation in the care that patients receive.[5] These benefits may also
provide incentive to stimulate productivity (e.g. patient throughput, reduced wait times,
rational test ordering etc.) and moderate growth in hospital costs.[3] In Australia, the
National Health Reform Agreement, signed by all Australian governments in August
2011, commits to funding public hospitals using ABF (with DRGs) where practicable.[5]
     Test volume refers to the total number of tests ordered for a given period. Assessing
test volume using a variety of methods, such as per test order episode, per patient
admission and per specific test type (e.g. Troponin), allows for a comprehensive analysis
of test utilisation in the pathology service. Using DRGs to assess the test volume can
control for many confounding factors, such as the type, severity and complexity of
patient conditions, which allows improved comparisons of test utilisations between
different facilities and over time. The aim of this study was to use DRGs to examine
pathology test volume and its variations between hospitals based on linked
administrative datasets collected from the pathology service and hospitals.


1. Methods

1.1. Study setting

This was a retrospective cohort study. The study included four hospitals serviced by a
single pathology laboratory service, including three large metropolitan general hospitals
with more than 500 beds (hospitals A, E, and F) and one regional hospital with about 200
beds (hospital D). The hospitals in this study used Australian Refined Diagnosis
Related Group (AR-DRG) Version 6.0 codes.[6] DRG codes are only allocated to
patients after they have been admitted, treated, and discharged from the hospital as
inpatients. The DRG code analysis therefore used an unconventional methodology:
DRG information, which only became available after the end of an inpatient encounter,
was retrospectively linked to pathology test order data created at a time when the
patient’s eventual diagnosis was still unknown.
     Ethics approval was granted by the South Eastern Sydney Local Health District
Human Research Ethics Committee (HREC; Project No. 11/146).

1.2. Data source and linkage

Two datasets were extracted from hospital clinical information systems: 1) all pathology
tests conducted by the pathology service departments over a period of six years (January
2008 to December 2013); and 2) all inpatient admission information during the
same period. The two datasets were linked using medical record number, patient
admission and discharge dates and pathology specimen collection dates. The final linked
dataset was established after validity and integrity testing of source data.
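The linkage step described above can be sketched in a few lines. This is an illustration only: the column names and sample records are invented, not taken from the study's actual schema.

```python
import pandas as pd

# Hypothetical pathology orders (column names assumed for illustration).
orders = pd.DataFrame({
    "mrn": [101, 101, 202],  # medical record number
    "collected": pd.to_datetime(["2010-03-02", "2010-03-04", "2011-07-10"]),
    "orderable": ["LFT", "EUC", "FBC"],
})

# Hypothetical inpatient admissions with allocated DRG codes.
admissions = pd.DataFrame({
    "mrn": [101, 202],
    "admitted": pd.to_datetime(["2010-03-01", "2011-07-09"]),
    "discharged": pd.to_datetime(["2010-03-06", "2011-07-12"]),
    "drg": ["F74Z", "E65B"],
})

# Join on medical record number, then keep only orders whose specimen
# collection date falls within the admission window -- mirroring the
# paper's use of admission/discharge and collection dates as link keys.
linked = orders.merge(admissions, on="mrn")
linked = linked[
    (linked["collected"] >= linked["admitted"])
    & (linked["collected"] <= linked["discharged"])
]
```

In practice a step like this would be followed by the validity and integrity checks the paper mentions (e.g. rejecting orders that match no admission, or multiple overlapping admissions).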
1.3. Data analysis

To assess test utilisation volume for inpatients at different hospitals over six years,
Poisson modelling was adopted. The average number of tests per patient day and its 95%
confidence interval were estimated from the following models: 1) with an adjustment
for hospital and year; and 2) with an adjustment for hospital, year, DRG, age and gender.
The average number of tests per patient day calculated from the first model is referred
to as the crude rate, in comparison with the adjusted rate from the second model. Data
analyses were conducted using SAS (Statistical Analysis System, SAS Institute) version
9.4.
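The paper fitted its Poisson models in SAS. As an illustration only, for a single hospital-year stratum the Poisson maximum-likelihood estimate of the rate reduces to total tests divided by total patient days, with a Wald confidence interval on the log scale; the counts below are invented.

```python
import math

# Invented per-admission counts for one hospital-year stratum.
tests = [12, 7, 9, 15]          # tests ordered per admission
patient_days = [3, 2, 2, 4]     # length of stay (days) per admission

total_tests = sum(tests)         # 43
total_days = sum(patient_days)   # 11

# Poisson MLE of the rate: tests per patient day for this stratum.
rate = total_tests / total_days

# Wald 95% CI on the log scale; for a Poisson count Y the variance of
# log(Y/T) is approximately 1/Y.
se_log = 1 / math.sqrt(total_tests)
ci_low = rate * math.exp(-1.96 * se_log)
ci_high = rate * math.exp(1.96 * se_log)
```

The study's actual models additionally adjust for hospital, year, DRG, age and gender, which requires a full Poisson regression (e.g. a generalised linear model with a log link and log patient days as offset) rather than this single-stratum calculation.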


2. Results

There were 11,370,000 orderable tests for inpatients at the four hospitals over the six
years. An orderable test may comprise a group of individual tests performed together;
for example, an orderable liver function test (LFT) consists of a panel of tests that are
performed together. The pathology testing profile by DRGs varied considerably between
hospitals. The top ten DRGs with the highest pathology test utilisation across all four
hospitals for all years were Rehabilitation (Z60A and Z60B), Tracheostomy-related
(A06A and A06B), Haemodialysis (L61Z), Bowel procedures (G02A), Chest Pain
(F74Z), Respiratory infections (E62A), Septicaemia (T60A) and Chronic Obstructive
Airways Disease (E65B). Figure 1 shows the proportion of pathology tests in each
hospital accounted for by patients classified with each of the top ten DRGs.




Figure 1. The top 10 DRGs accounting for the highest pathology test utilisation by hospitals (A, D, E, and F;
Cat CC: catastrophic complication or comorbidity). DRGs are shown in decreasing order based on pathology
test utilisation
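A ranking like the one in Figure 1 could be derived from the linked dataset with a simple aggregation. The column names and records below are invented for illustration; they do not reproduce the study's data.

```python
import pandas as pd

# Hypothetical linked order records (one row per test order).
linked = pd.DataFrame({
    "hospital": ["A", "A", "D", "D", "D", "E"],
    "drg": ["F74Z", "L61Z", "L61Z", "L61Z", "Z60A", "F74Z"],
})

# Total test volume per DRG, sorted to find the top DRGs overall.
by_drg = linked.groupby("drg").size().sort_values(ascending=False)

# Share of each hospital's test volume accounted for by each DRG,
# i.e. the proportions plotted per hospital in Figure 1.
counts = linked.groupby(["hospital", "drg"]).size()
share = counts / counts.groupby(level="hospital").transform("sum")
```

On the toy data, `L61Z` is the highest-volume DRG and contributes two thirds of hospital D's orders; on the real dataset the same two aggregations would yield the top-ten ranking and the per-hospital proportions.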
2.1. Crude rates

The crude rates, i.e. the mean number of tests per patient day for the four general
hospitals over the six year study period, were estimated from the model with an
adjustment for hospital and year (Figure 2). Hospital D had much higher mean rates of
tests per patient day than the other three hospitals (a difference of around 0.5 tests per
patient day from 2008 to 2012, and around 0.25 tests per patient day in 2013). Hospital
F showed greater variation in mean test rates per patient day over the six years, ranging from
3.7 in 2009 to 4.2 in 2012. Hospitals A and E had very similar mean rates of tests per
patient day. Lastly, all hospitals had lower mean test utilisation in 2013 than in 2012 and,
as already noted, the reduction was most dramatic at Hospital D.
Figure 2. The crude mean rate of pathology test volume per patient day with 95% Confidence Intervals at the
four general hospitals A, D, E and F over the six-year study period



2.2. Adjusted rates

Figure 3 shows the comparison of mean tests per patient day at four hospitals across the
six year study period with an adjustment for hospital, year, DRG category, patient age in
years and patient gender. When comparing Figure 3 to Figure 2, it is noticeable that the
adjusted mean rate of tests per patient day was lower than the crude rate (ranging from
3.0 to 3.9 tests per patient day, rather than 3.9 to 4.7 tests per patient day as shown in
Figure 2). Second, when controlling for DRGs and patient characteristics, Hospital D
was no longer the hospital with the highest mean rate of test orders per patient day. While
Figure 2 showed that Hospitals A and E had very similar ‘crude’ mean tests per patient
day rates, controlling for DRGs and patient characteristics showed that Hospital E had a
higher mean rate of test utilisation, which exceeded Hospital A by between 0.4 and 0.8
tests per patient day.
     The temporal characteristics of Figure 3 show that the adjusted mean test rates
generally increased with time from 2008 and 2009 through to 2011 and 2012, while
Figure 2 shows the crude mean test rates were generally unchanged. Both figures show
that the mean test rate per patient day was lower in all hospitals in 2013 than it was in
2012.
Figure 3. The adjusted mean rate of pathology test volume per patient day with 95% Confidence Intervals,
adjusting for hospital, year, DRG, age and gender, at the four general hospitals A, D, E and F over the six-
year study period



3. Discussion

Based on the linked datasets from different clinical information systems, we applied a
Poisson modelling approach to examine pathology test volumes and variations in rates
between hospitals. The availability of quality data through linking administrative
datasets collected from the pathology service and hospitals provides the starting point
for meaningful investigation of pathology services and for understanding the effect of
pathology testing on patient outcomes, such as length of stay.[7]
     Using DRGs as a way of controlling for casemix in the analysis models facilitates
the clinically meaningful description and categorisation of the illness or injury sustained
by the patient in terms of the hospital resources required to treat them. We observed that
pathology testing profiles varied considerably between hospitals for patients in the same
DRGs. The crude rate of tests per patient day took into account patients’ length of stay
in hospital, but failed to consider other casemix variables (such as DRGs) and patient
characteristics (such as age and gender). The adjusted mean rate of tests per patient day
was lower than the crude rate. The comparisons across hospitals showed variation in the
mean rates of tests per patient day among hospitals in the same calendar year. The
adjusted rate across all the hospitals also revealed a general increase in pathology
requests across time from 2008 to 2012. The adjusted rate was also higher in 2012 when
compared to 2011 for Hospitals A, D and F but not for E.
     The ordering of pathology tests can vary significantly across hospitals independent
of patient acuity and the types of medical services available.[8] In this study, we found
variation in test utilisation between hospitals after adjusting for DRGs and patient age
and gender, which could indicate differences in clinical work practices between
hospitals. Many factors may cause variation in clinical work practices, including
pressure from patients, peers or the hospital, clinical curiosity, insecurity, or even
habit.[9]
     Quality improvements in pathology requesting are dependent on the availability of
quality data and meaningful analyses to identify variation between locations across time
and account for differences in patient characteristics. DRGs are a valuable component of
any attempt to examine pathology test utilisation and how it varies between different
hospitals and across time. Using the methodology presented in this study, we will further
examine test utilisation for specific tests, for example Urea, Electrolytes and
Creatinine (EUC), one of the most frequently ordered (and potentially over-ordered)
tests. This is an important step towards improving the quality use of pathology.


References

[1]     C. Bayram, H. Britt, G. Miller, and L. Valenti, "Evidence-practice gap in GP pathology test
        ordering: a comparison of BEACH pathology data and recommended testing," University of
        Sydney, 2009.
[2]     R. B. Fetter, Y. Shin, J. L. Freeman, R. F. Averill, and J. D. Thompson, Case mix definition by
        diagnosis-related groups, Medical care (1980), i-53.
[3]     K. S. Palmer, D. Martin, and G. Guyatt, Prelude to a Systematic Review of Activity-Based Funding
        of Hospitals: Potential Effects on Health Care System Cost, Quality, Access, Efficiency, and Equity,
        Open Medicine 7 (2013), 94-97.
[4]     R. Busse, A. Geissler, W. Quentin, and M. Wiley, Diagnosis-Related Groups in Europe: Moving
        towards transparency, efficiency and quality in hospitals, McGraw-Hill International, 2011.
[5]     Independent Hospital Pricing Authority (IHPA), "Activity based funding for Australian public
        hospitals: Towards a Pricing Framework," ed, 2015.
[6]     Australian Institute of Health and Welfare (AIHW), "Classifications: Australian Refined Diagnosis
        Related Groups (AR-DRGs)," ed, 2011.
[7]     L. Li, A. Georgiou, E. Vecellio, A. Eigenstetter, G. Toouli, R. Wilson, et al., The effect of laboratory
        testing on emergency department length of stay: a multihospital longitudinal study applying a cross-
        classified random-effect modeling approach, Acad Emerg Med 22 (2015), 38-46.
[8]     L. J. Crolla, P. W. Stiffler, S. Vacca, and S. McNear, "The Laboratory Manager: Role in Compliance,
        Organizational Structure, and Financial Management," in Clinical Chemistry - Laboratory
        Management and Clinical Correlations, K. Lewandrowski, Ed., Philadelphia: Lippincott
        Williams & Wilkins, 2002, pp. 51-63.
[9]     M. T. Elghetany and A. O. Okorodudu, "Management of Test Utilization," in Clinical Chemistry
        - Laboratory Management and Clinical Correlations, K. Lewandrowski, Ed., Philadelphia:
        Lippincott Williams & Wilkins, 2002, pp. 223-330.