Scale and quality in Nordic hospitals

Empirical analyses of hospitals in production economics often find little or no evidence of scale economies and quite small optimal sizes. The medical literature, on the other hand, provides evidence of better results for hospitals with a large volume of similar procedures. Based on a sample of Nordic hospitals and patients, we have examined whether the inclusion of quality variables in the production models changes estimates of scale elasticity. A sample of 58 million patient records from 2008 and 2009 in 149 hospitals in Denmark, Finland, Norway and Sweden was collected. DRG points from the patient data were aggregated into three outputs (medical inpatients, surgical inpatients and outpatients) and linked to operating costs for 292 observations. The patient data were used to calculate quality indicators on emergency readmissions and mortality within 30 days, adjusted for age, gender, comorbidities, hospital transfers and DRG using DRG-specific logistic regressions. The hypothesis that the elasticity of scale increases when quality variables are included was tested against the null hypothesis of no change in the scale elasticity. The observations were used to estimate a cost function using stochastic frontier analysis (SFA). Country dummies, dummies for university hospitals and capital city hospitals, and the average travelling time for the patients were included as environmental variables. The estimated scale elasticities did not change with the inclusion of quality indicators in any of the tested models. This may be because medical volume effects are confined to a few patient groups, or are possibly even offset by effects in other groups where quality is reduced by volume. In one model, the scale elasticity was significantly larger than 1.0, a result that contradicts previous studies, which have found decreasing returns.

JEL classification: C14, I12


Introduction
Considerations of scale economies are important in determining the optimal hospital structure within a country or region. If there are economies of scale in hospital production, larger hospitals would have lower average costs than smaller hospitals, and hence a hospital structure with few large hospitals would be cost saving for the health sector. If travelling costs and medical outcomes are not adversely affected by a centralised hospital structure, larger hospitals may also be socially optimal.
There have been hospital mergers and increased centralisation of hospital services in many countries. However, empirical analyses of hospitals in production economics often find little or no evidence of scale economies and quite small optimal sizes. Aletras et al. (1997) reported a survey of a large number of empirical studies of economies of scale in hospital services production, with a view to giving recommendations to the British National Health Service (NHS) on the desirability of hospital mergers. Most of the included studies indicated that there are few economies of scale in hospitals beyond 200-300 beds. These findings were unrelated to whether the analysis was based on flexible cost functions, flexible production functions, data envelopment analysis, survival analysis, studies of multihospital firms or more ad hoc studies. However, a Canadian study that estimated optimal hospital size to be 179 beds found that the statistical models were not optimal for special or very large hospitals (Preyra and Pink 2006), and another recent Canadian study demonstrated that optimal scale varied across locations (Asmild et al. 2013).
Studies on Nordic data have usually come to the same result. In Kittelsen et al. (2015b), for example, there was evidence of diseconomies of scale in Denmark, Finland and Norway, while Sweden showed economies of scale; but these findings may be because the units of observation were mainly physical hospitals in the first three countries, while Sweden only had data at the county level ("Landsting"). A long data series of Norwegian hospitals 1999-2014 also found the optimal scale to be quite small (Anthun et al. 2017).
On the other hand, a Danish study identified moderate-to-significant economies of scale and scope (Kristensen et al. 2012). In addition, the medical literature provides evidence of better results for hospitals with a large volume of similar procedures. Birkmeyer et al. (2002) examined the relationship between mortality and volume for six different types of cardiovascular procedures and eight types of major cancer resections between 1994 and 1999 in the US, and found evidence of lower mortality with increased volume for all procedures, although the strength of the relationship varied. In a follow-up study, Reames et al. (2014) found that the relationship had strengthened over a 10-year period for five out of eight procedures examined and only weakened in one. Similar results were found for six surgical procedures in Tsai et al. (2013). In a Finnish context, Makela et al. (2011) found evidence that a high volume of total hip replacements in primary osteoarthritis patients reduced hospital costs through reduced length of stay, and may also have increased quality because the hip dislocation rate was reduced. An official report based on Swedish data gave some support for concentrating selected procedures to gain higher quality (SOU 2015:98).
It may be that the quality of hospital services increases with volume, without there being any cost saving associated with the increased quality. In order to examine how volume interacts with costs and quality, data on both indicators at a disaggregated level are needed. Until recently, mortality information has only been available at the level of procedures or diagnoses, while cost information is generally only consistent at the hospital level. As part of the EuroHOPE project (Heijink et al. 2015), a recent study of Nordic hospitals has used linked patient-level and hospital-level data to examine the relationship between productivity/costs and quality (Kittelsen et al. 2015a).
The present study builds on the data collected for the Kittelsen et al. (2015a) article, and on the case-mix corrected quality indicators developed there. That article applied data envelopment analysis (DEA) and found statistically significant quality differences between countries and hospitals for a variety of case-mix adjusted performance measures, but only weak relationships between productivity and quality. As explained in section 3 below, the quality indicators were case-mix adjusted using logistic regression on patient characteristics and diagnosis information. The article considered 11 quality indicators, including two measures of readmissions (emergency and inpatient), four mortality rates (within 30, 90, 180 and 365 days) and five patient safety indices (PSIs). The PSIs were only relevant to a very small share of patients, and the readmission indices and the mortality indices were each highly correlated among themselves. The authors found a statistically significant association between low mortality and high productivity (low costs), so that hospitals could become more efficient by decreasing mortality and costs at the same time. However, the study also found an association between low readmission rates and high costs, indicating the opposite: that quality (fewer readmissions) is, in fact, costly.
The theory of scope implies that there is a close relationship between the scale properties and the substitution between quality and quantity, since these are two aspects of the output. The returns to scale could in general be a function of the relative mix of output aspects. If an estimate of the cost function (cost frontier) is controlled for quality, then it is possible that, for constant quality, there could be more cost savings with increased volume. The main aim of the present study is to test whether quality-related economies of scale existed in Nordic hospital production. The Nordic EuroHOPE data are re-used to test whether controlling for quality increases estimates of the elasticity of scale.
There could be important policy implications if increased volume allows for reduced unit costs with the same quality, or alternatively allows for increased quality at unchanged unit costs.

Data
To perform the analysis in this study we used data on hospital inputs and on both outputs and outcomes. The productivity analysis utilized a single input of hospital costs, and three weighted outputs (medical inpatients, surgical inpatients, outpatient visits) based upon patient-level discharge register data from 2008 and 2009. To account for case mix, patient discharges were grouped into Diagnosis Related Groups (DRGs) using a common Nordic grouper, as explained below. To calculate 30-day mortality, demographic data were collected for 2010 as well. More details are available in Medin et al. (2013) and Anthun et al. (2013). All somatic hospitals with a 24-hour emergency department or at least two medical or surgical specialities were included, except in Sweden, due to problems with the cost data as explained in the next section. Table 1 gives descriptive statistics and summary definitions for the variables in the analysis.

Cost data
The hospital costs included all production-related costs from somatic hospitals. Costs were harmonized across countries by excluding costs for ambulances, VAT, capital costs, purchased care, and costs for teaching and research.
In Sweden, the cost data were assembled mainly from the Swedish Association of Local Authorities and Regions through the cost per patient database, from hospital annual reports, and from Statistics Sweden. The hospitals not recorded in these sources were sent a cost survey. Some counties could not allocate all costs for acute hospitals to individual hospital units, due to either accounting principles or organisational mergers of hospitals. For these counties the entire county was considered as one unit of analysis (Blekinge, Dalarna, Gotland, Gävleborg, Kronoberg and Sörmland). In addition, three counties could not deliver the costs for acute hospitals (Värmland, Västmanland and Jämtland) and were excluded from the analysis. As can be seen from Table 1, the total number of units for Sweden was n=52 in 2008 and n=54 in 2009. There were 46 individual Swedish hospitals in 2008 and 48 in 2009, together with six counties in each year. Six hospitals were missing in both years.
In Norway, the cost data were derived from the SAMDATA database of Norwegian specialised care, published annually by the Directorate of Health. The National Institute for Health and Welfare in Finland collects hospital cost data annually as part of its hospital productivity statistics, while annual productivity reports published by the Ministry of Health contained the Danish cost data.

2.2 Cost level deflator
The collected cost data were measured in nominal prices in each country, and the costs were deflated to create real costs in each country. There were differences in input prices between the countries, and to allow for comparison between countries the cost levels had to be harmonized. For the period 2008-2009, hospital cost deflators were not available from the OECD or Eurostat, so wage indices were calculated for the nine most important personnel groups. The wage indices were based on official wage data for the nine separate groups and comprised all personnel costs, including wage taxes and pension contributions (Anthun et al. 2013; Kittelsen et al. 2009; Medin et al. 2013). Personnel costs were the most important cost component, at about 60 % of total hospital costs. For the other costs we used the Purchasing Power Parity adjusted GDP price index from the OECD. To form the aggregate cost level deflator, input price indices should be weighted by fixed cost shares. The nine personnel group indices and the index for other costs were weighted with fixed Norwegian personnel cost shares for 2008, as personnel cost shares were not available for the other countries.
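As a concrete illustration of the weighting just described, the aggregate deflator is a fixed-share weighted sum of the personnel wage indices and the price index for other costs. The sketch below uses hypothetical indices and shares (the actual study used nine personnel groups and Norwegian 2008 cost shares):

```python
# Minimal sketch of the aggregate cost level deflator: input price
# indices weighted by fixed cost shares. All numbers are illustrative,
# not the actual indices used in the study.

def aggregate_deflator(wage_indices, wage_shares, other_index, other_share):
    """Weighted sum of price indices; the shares must sum to one."""
    assert abs(sum(wage_shares) + other_share - 1.0) < 1e-9
    return sum(i * s for i, s in zip(wage_indices, wage_shares)) + other_index * other_share

# Two hypothetical personnel groups (together ~60 % of costs) plus other costs
deflator = aggregate_deflator(
    wage_indices=[1.05, 1.02],   # wage growth per personnel group
    wage_shares=[0.35, 0.25],    # fixed cost shares for personnel
    other_index=1.03,            # PPP-adjusted GDP price index
    other_share=0.40,            # share of non-personnel costs
)
real_costs = 100.0 / deflator    # deflating nominal costs of 100
```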

2.3 Patient level data
Patient level data were collected from national administrative patient registries in all four countries. The level of data was departmental (speciality) discharges. Outpatient visits registered during inpatient stays were excluded.
Deaths outside of hospital were identified by linking the patient level data with other registers. In Denmark and Norway, the date of death was provided by the respective population registries. In Sweden and Finland, the time to death was collected by linking with the cause of death registries.

2.4 DRG grouping and weights
Diagnosis related groups (DRGs) constitute a system of classifying treatments into groups that are more homogeneous than all possible combinations of diagnoses and procedures. It is widely adopted in the western world as a system for reimbursement, in which each specific DRG is assigned a specific price or weight, usually based on the average costs estimated for individual admissions in that DRG.
Finland, Norway and Sweden each have a national version of a common grouping system for hospital visits, NordDRG, developed at the Nordic Casemix Centre. Denmark used to be part of NordDRG but developed a national adaptation, DkDRG, and started using this in 2002 (Ankjaer-Jensen et al. 2006). The NordDRG grouping as such is not used in practice in any Nordic country. A common grouping is desirable in order to enhance the comparability of the output measures and quality indices, and to remove the impact of some of the idiosyncrasies of the different health systems. All four countries have nation-wide patient registers, and all four use the same diagnosis and procedural classification systems: ICD-10 and the Nomesco Classification of Surgical Procedures. For DRG grouping, Datawell Oy Finland has developed a common Nordic grouper based on definitions from the Nordic Casemix Centre. This grouper allows similar grouper logic to be applied in all four Nordic countries, and all patient discharges were grouped with this common Nordic DRG grouper (see Kittelsen et al. (2015a) and Kittelsen et al. (2015b) for similar use of a fixed grouper).
To compare the outputs across countries, it is also necessary to apply identical weights for all countries. Such a common weight set does not exist, and the national weights are not directly applicable, since there are some differences between the national DRGs. Instead, a set of Finnish weights was developed: cost weights were calculated from pooled 2008 and 2009 cost per patient data from the Helsinki and Uusimaa hospital district in Finland, grouped with the common Nordic grouper. The hospital district is the largest in Finland, and its share of the total cost of acute somatic care is about 25 %. During the years 2008-2009 the district had an advanced cost accounting system. Finnish cost weights were used since cost per patient data were not available for the other countries. The weights are normalised so that the average weight of treatments across all DRGs is 1.0.
Basing these weights on average costs in a Finnish hospital district poses additional problems if the weights reflect costs or incentives that are particular to Finland. However, using calibrated Swedish weights in the Kittelsen et al. (2015a) study showed the results to be quite robust, and previous studies that exploit the variation in the use of activity-based financing in Nordic hospitals have found little effect on productivity (Kittelsen et al. 2008).
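The weight construction described above can be sketched as follows: each DRG's weight is its average cost per discharge divided by the overall average cost per discharge, so the discharge-weighted mean weight is 1.0 by construction. The DRG codes, costs and volumes below are hypothetical:

```python
# Sketch of DRG cost-weight calculation, normalised to a mean weight of
# 1.0 across discharges. The DRG codes, costs and volumes are hypothetical.

def drg_cost_weights(avg_cost, n_discharges):
    """avg_cost and n_discharges are dicts keyed by DRG code."""
    total_cost = sum(avg_cost[d] * n_discharges[d] for d in avg_cost)
    mean_cost = total_cost / sum(n_discharges.values())  # cost per average discharge
    return {d: avg_cost[d] / mean_cost for d in avg_cost}

weights = drg_cost_weights(
    avg_cost={"DRG209": 8000.0, "DRG371": 2000.0},
    n_discharges={"DRG209": 100, "DRG371": 300},
)
# The discharge-weighted mean of the weights equals 1.0 by construction
```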
In the productivity analysis, the Finnish cost weights were used to aggregate the patient discharges into three outputs. Earlier studies have attempted a varied number of aggregation groups, with different results (Magnussen 1996). For the present study, we have chosen three outputs to capture important differences among hospitals while not being too specific. Using too many output groups would cause more hospitals to be comparable only to themselves, exhausting the degrees of freedom. The three outputs are DRG-weighted surgical inpatients, medical inpatients and outpatients, as described in Table 1.
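A minimal sketch of this aggregation step: each discharge contributes its DRG weight to one of the three output groups. The DRG-to-group mapping and weights below are illustrative, not from the actual grouper:

```python
# Sketch of aggregating DRG-weighted discharges into the three outputs.
# The DRG-to-group mapping and the weights are hypothetical.

def aggregate_outputs(discharges, weights):
    """discharges: list of (drg, group) pairs; weights: DRG -> cost weight."""
    totals = {"surgical": 0.0, "medical": 0.0, "outpatient": 0.0}
    for drg, group in discharges:
        totals[group] += weights[drg]
    return totals

outputs = aggregate_outputs(
    discharges=[("DRG209", "surgical"), ("DRG371", "medical"), ("DRG371", "medical")],
    weights={"DRG209": 2.3, "DRG371": 0.6},
)
```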

2.5 Quality indicators
Of the 11 performance indicators calculated in Kittelsen et al. (2015a), this study has used only two: i) emergency readmissions within 30 days and ii) out-of-hospital mortality within 30 days. The remaining available performance indicators were either variants of these with different time frames or emergency status, or were patient safety indicators that were relevant for far less than one per cent of the patients.
Unlike planned readmissions, emergency readmissions as an inpatient within 30 days of a hospital discharge (but no sooner than the next day) are commonly interpreted as a signal of poor medical quality, provided that proper case-mix adjustment has taken place (Leng et al. 1999; Morris et al. 2014; Tsai et al. 2013). Only readmissions as an inpatient were included in this indicator, as coding practice for the emergency status of outpatients varied between countries. Although some level of readmissions is unavoidable, an emergency readmission could be a sign that the initial treatment was not adequate, or that the discharge was premature. A recent study found a strong association between post-discharge complications and readmissions (Morris et al. 2014). We included emergency readmissions for any reason, since poor quality in the initial treatment (e.g. an operation) can potentially cause a readmission with another diagnosis (e.g. an infection). Country differences in the readmission rates (reported in Table 1) were considerable, with Denmark at less than 5 % and Norway at almost 7 %. In Sweden, the coverage, and thus the quality, of the variable reflecting whether an admission is acute or planned was poor; we therefore performed separate analyses that included Sweden but excluded emergency readmissions.
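The readmission rule just described (an emergency inpatient admission between 1 and 30 days after a discharge, same-day admissions excluded) can be sketched as below; the function and field names are illustrative, not those of the actual registers:

```python
from datetime import date

# Sketch of the emergency readmission indicator: a subsequent emergency
# inpatient admission counts if it falls 1-30 days after the discharge.
# Function and argument names are illustrative.

def is_emergency_readmission(discharge_date, next_admission_date, next_is_emergency):
    gap_days = (next_admission_date - discharge_date).days
    return bool(next_is_emergency) and 1 <= gap_days <= 30

flag_in_window = is_emergency_readmission(date(2008, 3, 1), date(2008, 3, 10), True)
flag_same_day = is_emergency_readmission(date(2008, 3, 1), date(2008, 3, 1), True)
flag_too_late = is_emergency_readmission(date(2008, 3, 1), date(2008, 4, 15), True)
```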
The out-of-hospital mortality rate is probably the most widely accepted quality indicator. Ross et al. (2010) showed that admission to higher-volume hospitals was associated with a reduction in 30-day mortality for major medical conditions. Even though a share of mortality is unavoidable, lowering mortality will always be an improvement. The use of mortality rates to punish or reward hospitals or wards in pay-for-performance (P4P) schemes is controversial (Lilford and Pronovost 2010; Nicholl 2007), but in this analysis we are only interested in the statistical association between mortality and costs. An additional advantage of mortality as a quality indicator is its small measurement error.
In this analysis, we included only deaths within 30 days of admission. A terminally ill patient could potentially have several hospital stays within the last days of life (with likely variation between types of illness, types of hospital and between countries); to avoid exaggerating national differences in the treatment of terminal patients, we calculated the mortality dummy only for the last hospital visit before death.
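The attribution rule can be sketched as follows: the dummy is set only on the last visit before death, and only if death occurred within 30 days of that admission. The data layout is illustrative, assuming admission dates sorted ascending and any death date not preceding the first admission:

```python
from datetime import date

# Sketch of the 30-day mortality dummy: attributed only to the last
# hospital visit before death, so a terminal patient with several stays
# is counted once. Assumes sorted admission dates and, if a death date
# exists, that it is not before the first admission. Names are illustrative.

def mortality_flags(admission_dates, death_date):
    flags = [0] * len(admission_dates)
    if death_date is None:
        return flags
    # index of the last admission on or before the date of death
    last = max(i for i, a in enumerate(admission_dates) if a <= death_date)
    if (death_date - admission_dates[last]).days <= 30:
        flags[last] = 1
    return flags

# Two stays; death 19 days after the second admission -> only that stay flagged
flags = mortality_flags([date(2008, 1, 5), date(2008, 2, 1)], date(2008, 2, 20))
```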

2.6 Case mix adjusting variables
Case-mix adjustment used a logit regression model and variables available in the patient registers, resulting in a performance indicator for each hospital with a value above 1.0 if quality was lower than the Nordic average. The exact procedure is explained in Kittelsen et al. (2015a).
Ideally, the adjusting variables should capture characteristics of the patients and their illnesses that possibly influence the outcome, regardless of the treatment given by the hospital. The main risk adjustor used was the DRG assigned by the common Nordic grouper. Since the division into more than 700 DRGs was designed to capture most measurable patient differences that may influence costs, it will also capture many of the aspects that influence the expected values of the quality indicators. In addition to the DRG adjustment, the data were also adjusted for gender as well as age in 10-year groups. The age group from 0 through 9 was divided into two groups: one for infants (age 0) and one for children (ages 1-9). For data privacy reasons, the exact age was not available in the pooled cross-country dataset. Although partly endogenous, selected treatment variables were also allowed to adjust for risk, since these may reflect severity. To describe patient transfers in and out of the hospital or department, variables indicating where the patients came from and where they went were selected. These variables did not, however, distinguish between transfers to/from home, a (non-hospital) health clinic or a nursing home in all four countries. Comorbidity was included both as the number of secondary diagnoses and as the Charlson comorbidity index, which is based on the severity of secondary diagnoses (Charlson et al. 1987).
The case-mix adjustment corresponded to model 2 in Kittelsen et al. (2015a), and excluded length of stay, as this may be partially endogenous, as well as various characteristics of the residential municipality of the patient, as these were found to have little or no impact on the quality indicators. The case-mix adjustment was performed by indirect standardisation through a logit estimation on the adjusting variables within each DRG, and the performance indicator was then calculated as the observed divided by the predicted quality level given the patient mix of each hospital. This indirect standardisation ensures that, for example, having many patients in a DRG with high mortality only penalises hospitals that have higher than average mortality in that DRG; nor will having many patients in DRGs with high readmission rates by itself increase the readmission performance measure. Heterogeneity in observable patient-mix variables does not then bias the performance measures. The performance indicators listed in Table 1 are normalised to 1.0 for the average patient, although this would not be the average across hospitals. Higher numbers indicate worse quality. Figure 1 shows the hospital mean performance indicators together with the confidence intervals for each hospital. Even though the countries performed significantly differently, many hospital confidence intervals overlap, and a ranking of hospital performance can only be partial.
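The observed-over-predicted calculation can be sketched as below: given each patient's 0/1 outcome and the probability predicted by the DRG-specific logit, the hospital's indicator is observed events divided by expected events. The numbers are illustrative:

```python
# Sketch of indirect standardisation: a hospital's performance indicator
# is the sum of observed outcomes divided by the sum of probabilities
# predicted by the DRG-specific logit regressions. Values above 1.0
# indicate worse quality than the Nordic average. Numbers are illustrative.

def performance_indicator(observed, predicted):
    """observed: 0/1 outcomes per patient; predicted: logit probabilities."""
    return sum(observed) / sum(predicted)

# Three observed deaths where the case mix predicted 2.5 expected deaths
indicator = performance_indicator(
    observed=[1, 1, 1, 0, 0],
    predicted=[0.6, 0.5, 0.5, 0.5, 0.4],
)
# indicator = 3 / 2.5 = 1.2, i.e. worse than the case mix would predict
```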

Method
The hypothesis that the elasticity of scale increases when quality variables are included was tested against the null hypothesis of no change in the scale elasticity. Earlier evidence (Kittelsen et al. 2015b) suggested substantial inefficiency in the Nordic hospital sector, so the observations were used to estimate a cost function using Stochastic Frontier Analysis (SFA), which allows for the separate estimation of inefficiency and stochastic noise.
As discussed, the scale elasticity may well vary with output mix and size. Here we are interested in the simplest empirical specification that allows us to test our hypothesis. The Cobb-Douglas functional form was chosen since the estimated scale elasticity is then a constant in the domain of the function, and thus provides a single statistic that can be used to test the hypothesis. The Cobb-Douglas cost function has the form:

ln C = α + Σ_m β_m ln y_m,   (1)

where C is the necessary (minimum) cost for producing the output vector y, and α and β are parameters. (The original Cobb-Douglas function was defined with constant returns to scale by imposing Σ_m β_m = 1; the form used in the literature does not impose this restriction.) Input prices also enter the theoretical cost function, but it has not been possible to distinguish between different inputs at the hospital level, and the single input price was normalised to 1.0. The output coefficients β_m have the interpretation of cost elasticities: the percentage increase in costs with a one per cent increase in each output. The scale elasticity is defined as the increase in production resulting from a proportionate increase in inputs, which for the cost function translates into the inverse of the increase in costs resulting from a proportionate increase in outputs:

ε = 1 / Σ_m β_m.   (2)

In addition to the quantitative output variables y (the three DRG-weighted patient aggregates surgical inpatients, medical inpatients and outpatients), the empirical specification included a vector z of country dummies as well as dummies for university hospitals and capital city hospitals, and the average travelling time for the patients, as environmental variables or cost shifters. Thus, the model allowed for e.g. the cost level in one country to be higher than in another by a proportionality factor. Finally, the quality performance indicators q were controlled for in the estimated equation:

ln C_it = α + Σ_m β_m ln y_mit + ζ′z_it + δ′q_it + v_it + u_i,   (3)

where i indexes hospitals and t years. The estimation procedure used the standard stochastic frontier analysis (SFA) specification of decomposing the error term into an inefficiency term u and a stochastic error v (see e.g. Fried et al. 2008). Since the data constituted an unbalanced panel of observations for two years, a panel frontier estimator for the separate distributions of the inefficiency and stochastic error terms was used. The assumption is that the total variation of observed costs around the predicted costs can be partly due to a one-sided inefficiency term (always positive) and partly due to a symmetric stochastic error, here assumed to be normally distributed. In many panel models, inefficiency is modelled as a fixed effect for each hospital, but with only two years a fixed-effects model is precluded by a lack of degrees of freedom. Following Battese and Coelli (1988), the model included a time-invariant inefficiency term u_i that does not depend on year t, but by assumption has a truncated normal distribution, while v_it is normally distributed with mean zero. The statistic γ = σ_u² / (σ_u² + σ_v²) is the share of the inefficiency variance in the total error variance, i.e. the importance of inefficiency in the estimated model.

The tested hypothesis was that the scale elasticity estimate ε was larger with quality variables included than without. If higher quality is associated with a larger optimal scale, then one should expect the estimated elasticity of scale to be larger (and thus, by (2), the sum of the cost elasticity estimates Σ_m β_m to be smaller) in the model with quality variables than in the model without quality variables. The hypothesis was tested by computing ε with and without the quality term in equation (3) and testing whether the estimates were significantly different at the 95 % level.
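As a worked illustration of the Cobb-Douglas scale elasticity, the elasticity is simply the inverse of the sum of the estimated output cost elasticities. The medical and outpatient coefficients below are hypothetical; only the surgical coefficient of 0.194 matches a value reported in Table 2:

```python
# Sketch of the scale elasticity in a Cobb-Douglas cost function:
# the inverse of the sum of the output cost elasticities. The medical
# and outpatient coefficients are hypothetical placeholders.

def scale_elasticity(cost_elasticities):
    return 1.0 / sum(cost_elasticities)

betas = [0.194, 0.550, 0.220]    # surgical, medical, outpatients
eps = scale_elasticity(betas)
increasing_returns = eps > 1.0   # True when costs rise less than proportionately
```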

Results
The results of the regressions with and without the quality terms are tabulated in Table 2. All estimated models had a very high goodness of fit, and the coefficients on the three output variables were all reasonable and highly significant. The output coefficients in a Cobb-Douglas cost function have the interpretation of cost elasticities, so that e.g. a coefficient of 0.194 for surgical inpatients in the first model implies that a 1 % increase in the DRG-weighted number of surgical inpatients leads to a predicted 0.194 % increase in total costs.
Cost elasticities for surgical inpatients were stable across model specifications, but varied more for medical inpatients and outpatients, probably reflecting different criteria for choosing outpatient treatment in Sweden than in the other countries. In the models without Sweden, real costs were about 2 % higher in 2009 than in the reference year 2008, and significantly higher in Norway than in the reference country Finland. Travel time and university hospital status significantly increased costs; university hospitals had an estimated 13 % higher costs than non-university hospitals. Location in the capital of each country did not seem to affect costs. With Sweden included, this country had higher costs than Finland, while the 2009 dummy was insignificant; other coefficients were essentially unchanged. All models estimated that the larger part of the variation of observed costs around predicted costs was due to inefficiency rather than stochastic noise, since the inefficiency share (γ) was estimated at between 0.82 and 0.91.
As for the main hypothesis to be tested, the estimated scale elasticities did not change with the inclusion of quality indicators. With both quality indicators (mortality and emergency readmissions) included and Sweden excluded, the point estimate of the scale elasticity changed from 1.041 to 1.039. With only the mortality indicator included, the estimate changed from 1.049 to 1.048. Both changes were clearly insignificant. Thus we did not find support for our hypothesis that higher quality is associated with a larger optimal scale.
Perhaps surprisingly, and contrary to previous estimates in the literature, the scale elasticity was estimated as larger than one in all models, significantly so if Sweden was included. Thus, all models indicated increasing returns to scale. While the significance level of the scale elasticity being different from 1 changes when Sweden is included, the differences between the models (in the order of 0.006-0.009) are not significant.
Robustness has been examined (not reported in tables) by instead estimating a) an ordinary least squares (OLS) model, b) an SFA model that disregards the panel structure of the data, c) a translog cost function, d) a model where outpatients are counted instead of DRG-weighted, and e) a model without cost shifters/environmental variables such as capital city, travel time and university hospital. Since it has been suggested (Hvenegaard et al. 2011) that the quality effect on costs might be U-shaped, a sixth robustness check, f) adding quadratic performance terms, was performed. Results were essentially the same in all specifications, with no statistically significant changes in the estimated elasticity of scale when including quality variables.

Discussion
In this analysis, the inclusion of quality performance indicators did not change the scale properties. Thus, even if large hospital volume enhances quality, there seems to be no impact on costs when holding quality constant compared to when quality is disregarded. With the estimated scale properties, unit costs can be reduced by increasing volume without changing quality. This may be due to the absence of effects of quality on costs in general, but it may also be due to mechanisms that work in different directions. The medical volume effects on quality seem to have been mostly studied for surgical procedures, and it may be that non-surgical hospital treatments show medical diseconomies of scale.
The applied measures of quality performance may be too restrictive, covering only some of the relevant patient groups. While mortality and emergency readmissions are clearly relevant and important quality aspects, many patients have a low risk of either. Other aspects of treatment can influence health outcomes and patient satisfaction in particular groups and be major cost drivers for the hospitals. Unfortunately, such aspects are either too disease-specific (such as the patient safety indicators) or are not systematically available as data at the hospital level.
Since this study is based on aggregate hospital volume, it does not consider the volume per physician or the threshold effect linked to individual physicians' annual workload for a specific procedure (Ravi et al. 2014). The volume-outcome relationship might also differ across medical specialities and disappear at an aggregate level. Disease-specific quality instruments would be relevant in an analysis of costs and quality at the departmental or speciality level, but there is generally a lack of resource use or cost data at this level.
Finally, the quality indicators may be insufficiently case-mix adjusted. If larger hospitals treat more severe patients, everything else being equal, then one should expect both mortality rates and emergency readmissions to be higher in large hospitals. The quality indicators used control for differences between DRGs, patient age, gender, comorbidities and transfer patterns. The analysis further controls for differences between countries (i.e. different standards or guidelines), between university and non-university hospitals, and for capital city hospitals. However, if the patients treated e.g. in a large university hospital are more severe than those treated in a small university hospital, within each country, for a given DRG and patient characteristics, then there could remain unobservable quality-related economies of scale that are not captured.
Even without quality variables, the cost function estimates were characterised by increasing returns to scale. Attempts at finding the reason for this result and its deviation from the previous literature have not been successful. Methodologically, a change from the specification of e.g. Kittelsen et al. (2015b), which uses data for the earlier period 2005-2007, is that outpatients are now DRG-weighted instead of just being counted. The robustness exercise d) mimicked the previous specification but did not change the scale properties. Some earlier studies may also have failed to take account of potential cost disadvantages that are correlated with hospital size, such as university hospital status, capital city location or travelling times, in essence disregarding the possibility that large hospitals have more severe patients or other tasks such as teaching and research. However, estimates on this dataset excluding these environmental variables did not change the scale properties. A remaining possibility is that recent developments in hospital treatment technology have enhanced returns to scale through e.g. more specialised procedures or machinery.
However, there seems to be no evidence in this dataset that medical returns to scale measured at the hospital level provide any additional justification for larger hospitals over and above any economic returns to scale.

Conclusion
The analysis did not support the existence of medical volume effects on key quality indicators of sufficient strength to increase the scale elasticity at the hospital level. This may be because medical volume effects are confined to a few patient groups, or are possibly even offset by other groups where quality could be reduced by volume.
The results indicate that there is a potential to reduce costs per treatment by increasing hospital size, without sacrificing or enhancing quality as measured by mortality and readmission rates. Since the scale properties are in contrast to findings in several previous studies, it might be premature to take this as an argument for bigger hospitals in general. In addition, a full analysis must take patient distance-related costs and health effects into account.
If the medical volume effects are very different between patient groups, there might instead be reasons to reorganise the division of functions between large centralised hospitals and smaller local hospitals. Further research into the mechanisms behind medical returns to scale in different patient groups and hospital-level measures of performance is therefore warranted.

Figure 1 :
Figure 1: Case-mix adjusted performance measures for hospitals sorted by country, with 99% confidence intervals. The left panel is for mortality within 30 days (Mort30_LastDischarge). The right panel is for emergency readmissions within 30 days (Readm30_Emgergency), for which Sweden does not have data. Lower numbers indicate better quality.