2. About this QMI report

This quality and methodology report contains information on the quality characteristics of the data (including the European Statistical System five dimensions of quality), as well as the methods used to create them.

The information in this report will help you to:

  • understand the strengths and limitations of the data

  • learn about existing uses and users of the data

  • understand the methods used to create the data

  • decide suitable uses for the data

  • reduce the risk of misusing data


3. Important points

  • We have reviewed the measurement and reporting of public service productivity (PSP) in the UK; this quality and methodology information (QMI) has been updated to reflect the latest improvements recommended in the National Statistician's Independent Review of the Measurement of Public Services Productivity.

  • Most of the statistics discussed in this QMI are accredited official statistics; estimates for social security administration and tax administration are official statistics in development because they are new and undergoing further development with suppliers and users.

  • The estimate for PSP is displayed as an index that shows the change of the amount of output provided for each unit of input over time; to remove the effect of price changes over time, public service output and inputs are measured in quantity terms (also referred to as volume terms), instead of expenditure terms.

  • Some public service area outputs are also adjusted for changes in the quality of activities and services provided, as recommended by the Atkinson Review (PDF, 1.08MB); a quality adjustment is a statistical estimate of the change in the quality of a public service that allows for observation of the outcome of a public service being provided, rather than the output alone.

  • Productivity estimates included in this QMI are multi-factor productivity estimates, as opposed to labour productivity estimates (a single-factor productivity measure), and are not comparable with our headline measures of whole-economy labour productivity; more information can be found in How to compare and interpret ONS productivity measures.

  • We produce these estimates to measure the productivity of total UK public services; they do not measure value for money or the wider performance of public services, and there are also differences between our annual and quarterly PSP estimates.

  • The statistics produced for each PSP service area are based on the United Nations Statistical Division's Classification of the functions of government (COFOG), which aligns data with specific industries and service areas, but not with government departments; we have investigated how data series captured within COFOG align with government departments.

  • Because of the coronavirus (COVID-19) pandemic, notable adjustments were required to capture non-quality and quality-adjusted output that reflected activities for separate service areas; the adjustments included alteration of data sources and methods, and were unique to individual service areas.


4. Quality summary

Overview

Total public service productivity (PSP) is estimated by comparing growth in the total output provided with growth in the total inputs used. If the growth rate of output exceeds the growth rate of inputs, productivity increases, meaning that more output is being produced for each unit of input. If the growth rate of inputs exceeds the growth rate of output, productivity falls, indicating that less output is being produced for each unit of input.

Output, inputs, and productivity for total public services are estimated by combining growth rates for individual services, using their relative share of total government expenditure as weights.
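As an illustration of this aggregation, the following is a minimal sketch using invented growth rates and expenditure shares rather than ONS data; the published calculation is more involved.

```python
# Minimal sketch of the aggregation described above, using invented figures.
# Each service's output and inputs growth is weighted by its share of total
# government expenditure; productivity is the ratio of the two indices.

services = {
    # service: (output growth, inputs growth, expenditure share)
    "healthcare": (0.030, 0.020, 0.40),
    "education":  (0.010, 0.015, 0.35),
    "defence":    (0.020, 0.020, 0.25),  # inputs-equals-outputs: zero productivity growth
}

output_growth = sum(out * share for out, _, share in services.values())
inputs_growth = sum(inp * share for _, inp, share in services.values())

# Productivity rises when output grows faster than inputs, and falls otherwise.
productivity_growth = (1 + output_growth) / (1 + inputs_growth) - 1
print(f"output: {output_growth:.2%}, inputs: {inputs_growth:.2%}, "
      f"productivity: {productivity_growth:.2%}")
```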

The PSP measures included in this quality and methodology information (QMI) are not directly comparable with our market sector multi-factor productivity estimates owing to differences in the methodologies used. There are also differences between our annual and quarterly PSP estimates, which are described in the Coherence and comparability subsection of Section 6: Quality characteristics of the data. For further information, see our How to compare and interpret ONS productivity measures methodology and our A simple guide to multi-factor productivity methodology.

We produce these estimates to measure the productivity of total UK public services. They do not measure value for money or the wider performance of public services. For example, they do not indicate whether the inputs have been purchased at the lowest possible cost, or whether all desired outcomes are achieved through the output provided.

The methodology for calculating these statistics is based on the recommendations of the Atkinson Review (PDF, 1.08MB) for the measurement of government output and productivity for the national accounts and the recommendations set out in the National Statistician's Independent Review of the Measurement of Public Services Productivity.

Estimates are published on a calendar-year basis to be consistent with the UK national accounts. Estimates are available both for total and individual service areas. These are included in our annual Public service productivity: total, UK articles. Detailed information on the sources and methods can be found in our Public service productivity estimates: sources and methods methodology.

Uses and users

Users of our PSP measures include:

  • UK government departments, including the Cabinet Office, HM Treasury, and regulatory bodies

  • the National Audit Office

  • the Office for Budget Responsibility

  • the NHS

  • press

  • general public

  • the Institute for Fiscal Studies (IFS)

  • the Institute for Government

  • the Nuffield Trust

  • the Productivity Institute

  • the Economic Statistics Centre of Excellence (ESCoE)

  • academic institutions

  • national statistical institutes

  • international statistical bodies

These users draw on the productivity estimates in several ways, including to inform previous IFS Green Budgets and to brief the Cabinet Office's ministers and permanent secretaries. We have also advised government departments on how to incorporate the general methodology of the estimates into their own work.

We have closely collaborated with government departments, academics, and other organisations across the UK to identify new data sources and innovative methods. This has led to the improvement of our existing published PSP estimates. These improvements were detailed in the National Statistician's Independent Review of the Measurement of Public Services Productivity. We also introduced these improvements in our annual Public service productivity: total, UK, 2022 article. We have summarised the impact of these improvements on our annual estimates in our Impact of improved methods on total public service productivity, 1997 to 2021 article.

This QMI focuses on our annual estimates. However, we also publish our official statistics in development in our more timely Public service productivity, quarterly, UK bulletins.

Strengths and limitations

Strengths

  • Recent developments have improved the accuracy, granularity, and timeliness of our published estimates of public services productivity.

  • Most data we use are administrative data from government departments, which means we are not reliant on surveys.

  • The data can be disaggregated in multiple ways, including by service area.

  • Inputs estimates are disaggregated by component, for example, by labour, goods and services, and consumption of fixed capital.

  • Productivity estimates are calculated with and without adjustments for the quality of output, so some data can be used to estimate the effect of quality change on public services output.

  • The open revisions policy allows us to continuously improve the dataset, which means that the estimates are not constrained by Blue Book procedures described in our UK National Accounts, The Blue Book: 2024 compendium.

Limitations

  • There is a two-year time lag in producing the annual estimates because of data availability, so our latest annual Public service productivity: total, UK, 2022 article covers the period 1997 to 2022; to account for this time lag, we also publish our official statistics in development in our Public service productivity, quarterly, UK bulletins.

  • We use several different ways to measure output when producing our statistics; some service areas are quality adjusted, some are measured directly using a cost-weighted activity index or a revenue-adjusted activity index (used for tax administration), and the remaining output is measured indirectly and assumed to be equal to inputs.

  • In areas where output is assumed equal to inputs (the "inputs-equals-outputs" approach), productivity growth is also assumed to be zero; 36.3% of total public service productivity is estimated using the inputs-equals-outputs approach.

  • There is no geographical breakdown of the estimate; the numbers given are for the UK as a whole.

  • When data are received on a financial-year or academic-year basis, we use a "cubic splining" process to generate a calendar-year trend from the financial-year or academic-year data; a sketch of this conversion follows this list.

  • Despite recent improvements, we continue to work with HM Treasury, other government departments, and academic experts to further develop PSP estimates across each service; this is a continually evolving area, and so productivity figures may not currently represent all activities.
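As a rough illustration of the calendar-year conversion mentioned above, the sketch below fits a cubic spline through invented financial-year totals; the exact ONS splining procedure may differ in its placement of knots and handling of endpoints.

```python
# Hypothetical sketch of converting financial-year (April to March) totals
# into calendar-year estimates using a cubic spline; figures are invented,
# and the exact ONS procedure may differ.
import numpy as np
from scipy.interpolate import CubicSpline

# Financial-year totals, indexed by the calendar year in which the FY starts.
fy_start_years = np.array([2016, 2017, 2018, 2019, 2020])
fy_totals = np.array([100.0, 103.0, 105.5, 109.0, 111.0])

# Centre each financial year on its midpoint (around the end of September).
spline = CubicSpline(fy_start_years + 0.75, fy_totals)

# Read the fitted trend off at calendar-year midpoints.
calendar_years = np.array([2017, 2018, 2019, 2020])
calendar_estimates = spline(calendar_years + 0.5)
print(dict(zip(calendar_years.tolist(), np.round(calendar_estimates, 2).tolist())))
```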


5. Recent improvements by service area

Several major changes have been made to the associated public service productivity (PSP) statistics over the last two years. These are outlined in detail in the National Statistician's Independent Review of the Measurement of Public Services Productivity. This section summarises those improvements by service area.

Healthcare

We have implemented several improvements for healthcare outputs. We have equivalised unit costs for the same hospital procedures carried out through different modes of provision, and for ambulance services with and without transport to hospital. These changes have been backdated to 2014.

We have removed excess bed days activity from volume calculations for 2014 to 2018 to align with the change made to how these costs were captured in NHS's National Cost Collection from 2018 to 2019 onwards.

National screening services that were not previously captured in our output measure have been incorporated from 2014 onwards. This includes the abdominal aortic aneurysm, bowel cancer, breast, and cervical screening programmes.

We have revised the overall weight of general dental to address disproportionately high weights for activity growth created by changes during the coronavirus (COVID-19) pandemic.

We have also improved our quality adjustment for healthcare. A time quality adjustment factor, which applies the health gain from treatment discounted over remaining life expectancy, has been extended to non-elective procedures, in addition to the elective procedures to which it was already applied. This improvement has been implemented back to 2014.

Education

We have made improvements for education inputs. "All" salaries from our Annual Survey of Hours and Earnings (ASHE) were used in previous estimates to inform salary data for teachers in Northern Ireland and Scotland, and support staff in England, Wales, and Scotland. Teacher salaries for England and Wales are still provided by the Department for Education (DfE) and StatsWales, respectively.

We have now updated full-time equivalent (FTE) salaries so they are consistent with the FTE staff numbers used in the calculation. This approach is consistent with other areas across public service productivity.

For education outputs, there have been a number of changes that focus on improving data sources and how activity is captured and categorised. We adopted these changes in our Public service productivity: total, UK, 2021 article.

Compulsory education was previously cost-weighted using four categories. To increase the granularity of the cost-weighting categories, compulsory education output for England now contains eight categories: four for local authority-maintained institutions and four for primary, secondary, special, and alternative provision academies.

Pre-primary education was previously captured in two categories. These categories have now been combined into a single pre-primary category. This is because national education departments said there was little difference in cost between the two categories, and there were data limitations in splitting expenditure by these two categories separately.

Pre-primary pupils in primary schools in England were previously captured as primary pupils. All pupils under 4 years of age at the start of the academic year are now captured as pre-primary activity, even if they are enrolled in a primary school.

Funding became available for some 2-year-olds in pre-primary education in the academic year 2013 to 2014. The extended entitlement of 30 hours was also introduced for 3- to 4-year-olds in families that met certain eligibility criteria in 2017. Updates to the output measure mean it now reflects these changes.

We previously applied a constant factor of 0.5 to calculate full-time equivalence (FTE) – our preferred measure – where part-time or full-time pupils are not identifiable. This has been replaced with a time- and country-varying FTE factor, based on average FTEs for this group.

Teacher training has been removed from education outputs entirely and healthcare training was removed from 2012. This is because these categories are not captured in higher education, the national accounts, or as aggregate further education.

We have introduced a student well-being quality adjustment, which replaces the previous bullying measure. Well-being is emerging as a policy focus in education and has been associated with academic achievement.

Data for this quality measure are taken from the Understanding Society's harmonised UK Household Longitudinal Survey (UKHLS) for children aged 10 to 15 years. It is based on students' responses about their happiness with "school" and "schoolwork".

We calculate the percentage of total positive responses to these two components, relative to negative and neutral responses, and use the resulting growth rates to inform the well-being index. Well-being is weighted alongside other quality parameters like attainment. This is weighted according to the percentage of expenditure allocated for addressing pupil deprivation, as declared in the DfE's National funding formula policy. This quality measure is introduced from 2003.
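A minimal sketch of this calculation, with invented response counts pooled across the two questions (the published processing of the UKHLS microdata is more involved):

```python
# Sketch of the well-being index calculation with invented response counts,
# pooled across the "school" and "schoolwork" questions.

responses = {
    # year: counts of positive, neutral, and negative responses
    2019: {"positive": 6200, "neutral": 2100, "negative": 1700},
    2020: {"positive": 6050, "neutral": 2250, "negative": 1700},
}

def positive_share(counts):
    """Share of positive responses relative to all responses."""
    return counts["positive"] / sum(counts.values())

# The growth rate in the positive share informs the well-being index.
growth = positive_share(responses[2020]) / positive_share(responses[2019]) - 1
print(f"well-being index growth: {growth:.2%}")
```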

Further education (FE) attainment for England has been included as a new education quality adjustment measure, following the Public Service Productivity (PSP) Review. This is based on the percentage of students meeting the minimum requirement for Level 2 and Level 3 qualifications by 19 years of age. Level 2 and Level 3 attainment indices are weighted into a total FE attainment index, based on the percentage of qualifications achieved each year. This measure has been introduced from 2004.

We apply a "cohort-split" model to account for the cumulative nature of education, while processing attainment as part of quality adjustment. The pandemic violated the assumptions applied to the model, so methodological intervention was necessary. More information can be found in the Adjustments made in response to the coronavirus (COVID-19) pandemic subsection of Section 5: Recent improvements by service area.

Adult social care

Adult social care (ASC) did not receive any substantial methodological developments during the PSP Review. However, we reviewed and updated the choice of deflators used to determine the volume of goods and services for some expenditure transactions.

Children's social care

Children's social care (CSC) did not receive any substantial methodological development during the PSP Review. However, several improvements were also implemented for CSC inputs deflators to ensure consistency across PSP statistics.

We transitioned from using the index of labour costs per hour to the average labour compensation per hour deflators for labour. Data on this can be found in our Labour costs and labour income, UK dataset.

Capital deflators were revised retrospectively because of changes in capital stocks data between Blue Book 2023 and Blue Book 2024. General information about this can be found in our UK National Accounts, The Blue Book: 2024.

Intermediate consumption deflators are now chain-linked. This replaces the previous method of weighting growth rates of different deflators. The expenditure weights underpinning intermediate consumption deflators have also been updated based on new data.

We have made improvements to the measure of CSC output. The representativeness of existing variables was enhanced, and directly measured output now accounts for about 71% of the combined CSC output.

Social security administration

We have made several improvements for social security administration (SSA) outputs.

We have developed an experimental methodology to measure the transfer from legacy benefits to Universal Credit (UC). This allows for productivity growth to be measurable from 2018 onwards and replaces the previous inputs-equals-outputs approach. UC is integrated into the SSA output index by using a benefit-weighted index for the administration output of UC and legacy benefits. UC output is further adjusted for changes over time in the proportion of claims involving several main entitlements.

This approach was developed with support from the Department for Work and Pensions (DWP). It aims to overcome the limitations of using a conventional cost-weighted activity index (CWAI) for measuring productivity in the transition to UC, and better accounts for the high degree of heterogeneity in UC claims.

Because of data limitations, the new method is only introduced from 2016 onwards. However, updated data have been used to construct a CWAI that includes UC for 2013 to 2016, which was a relatively small component of SSA output in this time period.

Following the PSP Review, SSA now includes a quality adjustment for the first time. The quality adjustment is based on adjusting output according to the "correctness" rate of administered benefits, which is informed by DWP fraud and error rates.

Fraud and error are classified into three categories:

  • official error, which includes errors made by the department

  • customer error, which includes genuine mistakes made by the claimant

  • customer fraud, which is deliberate fraud by the claimant

Benefits can be underpaid or overpaid in association with each category, though there are no underpayments for customer fraud. The gross overpayments and underpayments for total DWP benefits are published in the DWP's Fraud and error in the benefit system report. These data are used to derive a total fraud and error rate for a given year, which is subtracted from 100% to give a "correctness" rate for that year. The growth rates in "correctness" rates inform quality-adjusted output.
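A small worked example of this derivation, with invented rates (the DWP publishes the actual overpayment and underpayment rates):

```python
# Worked example of the "correctness" quality adjustment with invented rates.
# The total fraud and error rate is subtracted from 100% to give a
# correctness rate; growth in correctness scales quantity output.

fraud_and_error_rate = {2021: 0.039, 2022: 0.041}  # invented totals
correctness = {year: 1.0 - rate for year, rate in fraud_and_error_rate.items()}

quality_growth = correctness[2022] / correctness[2021] - 1
quantity_output_growth = 0.012  # invented SSA quantity output growth

quality_adjusted_growth = (1 + quantity_output_growth) * (1 + quality_growth) - 1
print(f"correctness growth: {quality_growth:.3%}, "
      f"quality-adjusted output growth: {quality_adjusted_growth:.3%}")
```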

Police and immigration

Several improvements have been implemented in the measure of police and immigration inputs. We improved the granularity of the direct labour police salary data. We have incorporated salary information for each police rank for the first time, including a split of uplift and non-uplift constables. This was previously available only at an aggregated level that combined police ranks.

We introduced direct labour information for Northern Ireland, meaning that all devolved countries are now included.

We improved the deflators used in our processing by incorporating bespoke deflators for intermediate consumption. The bespoke intermediate consumption deflator used detailed expenditure information for the police and immigration service area to tailor the deflator to the goods and services used across the time series.

We decided to expand the service area name (previously "police") for the 2021 estimates. This makes it clearer that the service area includes inputs for both police and immigration services.

The National Statistician's Independent Review of the Measurement of Public Services Productivity recommends that "police" and "immigration and citizenship" should be split from 2004 because these are two substantial policy areas and are distinct enough to each have their own focus.

Public order and safety

Public order and safety (POS) includes fire, courts (including magistrates' courts, county courts, the Crown Court, the Crown Prosecution Service, and legal aid), prisons, and probation.

We have made improvements to the measure of fire and rescue service inputs. We moved from indirect to direct measurement for local government labour by sourcing FTE data that we then matched to salary data from ASHE.

We improved intermediate consumption (IC) deflators by changing from a general implied deflator to a range of composite, bespoke deflators tailored for each of the service areas, where possible. This is for all areas within POS except fire, which is deflated using a headline Consumer Prices Index (CPI) deflator.

Improvements to the measurement of POS output have increased the granularity of cost-weighting used for the Crown Court, legal aid, prisons, and probation. Improvements to the output measure for the Crown Court mean it now reflects differences in the hearing time taken for indictable-only and triable-either-way trials (with both groups further split by plea type), committals for sentencing, and appeals. More information is available in the Ministry of Justice's (MoJ's) Waiting and Hearing times in the Crown Court spreadsheet (XLSX, 9.99MB).

Legal aid output uses detailed activity and unit cost data across legal aid services in MoJ's Legal aid statistics.

The output for prisons now reflects differences in the unit costs for different categories of prison in England and Wales. Output for probation services accounts for differences in the cost of probationers who are on licence, relative to those serving community or suspended sentence orders.

We have also further developed the quality adjustments. This includes introducing more granular data for the courts' timeliness quality adjustment. We have reintroduced the reoffending quality adjustment, after data had been held constant because of the pandemic. We have also reviewed the weighting for quality adjustments and, where possible, made improvements.

Defence

We have made improvements in the measure of defence inputs.

We transitioned from indirect to direct labour measurement. This is a methodological development based on existing available data. This improvement means we can produce a more accurate time series, because we can account for the skill mix of the workforce by including rank or grade and salaries for service and civilian personnel.

We also improved the deflators used for intermediate consumption and capital by transitioning from a general implied deflator for defence to bespoke deflators for intermediate consumption and capital. The bespoke intermediate consumption deflator is a national accounts-consistent deflator that is derived and applied to defence intermediate consumption spending.

The bespoke capital deflator is the national accounts defence capital deflator that is applied to defence capital expenditure data. A cost-weighted Laspeyres volume index is then calculated for the volume of defence inputs, using expenditure shares, and assumed to equal the volume of defence output.
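A minimal sketch of a cost-weighted Laspeyres volume index of inputs, with invented component growth rates and expenditure shares:

```python
# Sketch of a cost-weighted Laspeyres volume index for defence inputs,
# using invented volume growth rates and previous-period expenditure shares.

components = {
    # component: (volume growth, previous-period expenditure share)
    "labour":                   (0.010, 0.50),
    "intermediate consumption": (0.025, 0.35),
    "capital":                  (0.005, 0.15),
}

inputs_volume_growth = sum(g * share for g, share in components.values())
print(f"defence inputs volume growth: {inputs_volume_growth:.2%}")

# Under the inputs-equals-outputs convention, defence output growth is set
# equal to this inputs growth, so measured productivity growth is zero.
```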

Tax administration

Productivity estimates for tax administration were published for the first time in our Public service productivity: total, UK, 2022 article. The service area covers inputs and output for taxes administered by HM Revenue and Customs (HMRC) only, excluding customs. It does not include locally administered taxes, like Council Tax and business rates, or taxes collected by devolved governments.

The measure covers the period from 2018 to 2022. For the period 1997 to 2017, tax administration is still captured in the "other" government services grouping on the inputs-equals-outputs basis.

Tax administration inputs are measured indirectly and are based on estimates of expenditure spent on tax administration functions within HMRC. Input expenditure is adjusted in line with national accounts principles to maintain coherence and comparability with other PSP service areas. This allows tax administration to be removed from the "other" service area for the period 2018 to 2022 on a like-for-like basis.

Activity data are available to directly measure output for taxes accounting for 89% of tax administration expenditure. The remaining output is calculated on the inputs-equals-outputs basis. We apply a "revenue adjustment" that adjusts the cost weights by the revenue raised per British pound of administrative cost for different taxes. This means efficiency improvements from changes in the number of tax payments made for low-cost taxes, relative to high-cost taxes, can be included in the measure. However, it does not address any other aspects of quality in tax collection, so the measure remains non-quality adjusted.
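As a hypothetical sketch of how such a revenue adjustment could work, with invented figures (the published weighting scheme may differ):

```python
# Hypothetical sketch of the revenue adjustment, with invented figures:
# each tax's cost weight is scaled by the revenue raised per pound of
# administrative cost, so growth in payments of high-yield, low-cost taxes
# counts for more in the output measure.

taxes = {
    # tax: (growth in payments, admin cost share, revenue per GBP of admin cost)
    "income tax": (0.020, 0.55, 250.0),
    "VAT":        (0.010, 0.30, 300.0),
    "other":      (0.005, 0.15, 120.0),
}

# Revenue-adjust the cost weights, then renormalise them to sum to one.
raw_weights = {t: share * revenue for t, (_, share, revenue) in taxes.items()}
total = sum(raw_weights.values())
weights = {t: w / total for t, w in raw_weights.items()}

output_growth = sum(taxes[t][0] * w for t, w in weights.items())
print(f"revenue-adjusted output growth: {output_growth:.2%}")
```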

Comparing Office for National Statistics tax administration productivity and HM Revenue and Customs' "cost of collection" efficiency measures

The official statistics in development method of measuring tax administration productivity, developed by the PSP Review and used by the Office for National Statistics (ONS), is coherent and comparable with other PSP service areas. However, it differs notably from HMRC's preferred measure of performance – the "cost of collection" – which compares tax revenue with the cost of collecting it.

HMRC have been using the cost of collection to measure value for money for over 10 years. This long time series is helpful for demonstrating efficiency savings over time. These estimates are published in HMRC's Annual Report and Accounts. The cost of collection is generally presented as the cost (in pence) per British pound of revenue collected. The cost of collection was 0.50 pence per pound collected in the financial year ending 2022.

In a scenario where tax rates go up, tax revenue increases, and the costs of collecting the tax stays the same, the cost of collection measure would decrease, reflecting efficiency gains. This would not necessarily be the case in the ONS productivity estimates, unless the increase in tax revenue was because of an increase in the number of taxpayers, rather than just the tax rate.
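A worked version of this scenario, with invented figures, shows the divergence between the two measures:

```python
# Worked example of the scenario above, with invented figures: a tax-rate
# rise lifts revenue by 5% while collection costs and taxpayer numbers are flat.

cost = 5.0e9             # HMRC collection cost in GBP (unchanged)
revenue = {"before": 1.00e12, "after": 1.05e12}

# The HMRC cost of collection (pence per pound collected) falls...
for label, rev in revenue.items():
    print(f"cost of collection {label}: {100 * cost / rev:.3f}p per GBP")

# ...but the ONS volume measure is unchanged, because the activity measure
# (the number of taxpayers) has not changed.
taxpayers_before = taxpayers_after = 32_000_000
print(f"ONS output growth: {taxpayers_after / taxpayers_before - 1:.1%}")
```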

Both the HMRC cost of collection approach and the ONS productivity estimates can be framed through output and inputs, but reflect different concepts in their measurement. 

In HMRC cost of collection efficiency, inputs are measured using HMRC expenditure on tax collection, in current prices. Output is measured using total revenue collected, in current prices.

In the ONS tax administration productivity, inputs are measured using HMRC expenditure on tax collection, converted to volume terms (or deflated). Output is measured using revenue-adjusted activity. This means that the number of taxpayers, registered traders, or operators for each tax is weighted to reflect the average tax payment made. Figure 1 shows inputs and output for both measures, reflecting the differences outlined in this section.

In the ONS measure, there is greater variation over the period between the inputs and output indices. This results in a more volatile productivity index, compared with the cost of collection efficiency index where inputs and output indices follow very similar trends. The ONS and HMRC measures are shown in Figure 2.

Adjustments made in response to the coronavirus (COVID-19) pandemic

Several major adjustments were required to produce public service productivity (PSP) statistics because of the coronavirus (COVID-19) pandemic. The pandemic caused widespread disruption to the provision of public services, which led us to adjust data sources and methods to better reflect the output of public services during this period.

Coronavirus adjustment to education

There were repeated changes to schooling policies during the pandemic. The need to measure education output as consistently as possible required us to keep innovating, to ensure measurement kept up with developments in schools.

We have reviewed and aligned our measurement approaches to provide consistent accounting for remote learning during 2020. This is in line with the changes to several policy regimes and the measurement of education output implemented in the UK national accounts. This reduced the estimated extent to which remote learning was an effective substitute for in-person teaching at the start of the pandemic, compared with during the autumn term of 2020.

Quality-adjusted output for education also required intervention because of the lack of comparable attainment data with years before the pandemic. As a result of lockdown protocols and the disruption caused by the pandemic, conventional exam practices across the UK were cancelled and students' academic performances were determined by teacher-assessed grades. There were concerns about the choice of teacher-assessed grades as the quality adjustment measure, because of potential bias and comparability with pre-coronavirus pandemic attainment.

Alternative data for quality adjustment were needed to inform attainment for the academic years affected by the pandemic (2019/20 onwards). This was because historical attainment data were not published during the pandemic. There were also concerns around grade inflation following teacher-assessed grading. The National Reference Test (NRT) was identified as a robust corrective measure. The NRT captures similar attainment data in England (for mathematics and English only) and was not subject to the same bias effects during the pandemic. Similar metrics were not available for the devolved nations, so these data have been used to inform attainment from 2019/20 onward for all primary and secondary schools across the UK.

The "cohort-split" model we use to apportion attainment had to be revised to account for the pandemic. The model combines scores across cohorts to account for the cumulative effect of learning. This means that the attainment index for each year includes the individual attainment scores for all cohorts in a year at that respective level of education (for example, primary covers reception to year 6, secondary covers year 7 to year 11, and further education covers ages 16 to 19 years). These scores are informed by eventual attainment and are apportioned equally for all years, with revisions made as new data become available.

To avoid introducing pandemic-related scores into the back series, a five-year average was used to approximate attainment where data would have been extracted from the 2020/21 or 2021/22 academic years. A further fix was introduced to account for an offset in residual scores and ensure that the sum of all attainment across years matched observed attainment. This residual dampening was only applied to the 2020/21 and 2021/22 years. The "normal" cohort model calculations will resume when the last cohort who attended school during the pandemic take their exams.
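A simplified sketch of the five-year average fix, with invented attainment scores (the residual dampening step is omitted here):

```python
# Simplified sketch of the pandemic fix described above, with invented scores:
# where attainment would have been drawn from the 2020/21 or 2021/22 academic
# years, a five-year average of earlier scores is substituted instead.

attainment = {  # academic year ending: attainment score (invented)
    2016: 62.0, 2017: 62.5, 2018: 63.1, 2019: 63.4, 2020: 63.8,
}

five_year_mean = sum(attainment.values()) / len(attainment)
for pandemic_year in (2021, 2022):
    attainment[pandemic_year] = five_year_mean  # proxy for missing exam data

print({year: round(score, 2) for year, score in attainment.items()})
```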

Further education attainment data are still inconsistent and cannot be compared with the pre-pandemic series. This is because alternative grading practices were put in place during this time. We have not identified suitable alternative data sources, so attainment indices have been held constant since 2018/19.

No data were available for the 2019/20 and 2020/21 academic years for the key stage 2 (KS2) disadvantaged attainment gap (DAG) index, which also contributes to the quality adjustment metric. The DAG index has been held constant over these periods, but is processed as normal from the 2021/22 academic year.

Coronavirus adjustment to healthcare

We included new healthcare output activities in 2020 to capture the volume of coronavirus-related testing, tracing, and vaccination output that was provided. This applied the same methods used to capture volume output in the UK national accounts. These services were established to manage and mitigate the impact of COVID-19. They represented a sizeable contribution to public service healthcare output between 2020 and 2022. More information can be found in our Measuring the economic output of COVID-19 testing, tracing and vaccinations: April 2020 to June 2021 methodology.

Expenditure relating to goods and services used to combat the coronavirus pandemic was reported in the Department of Health and Social Care's (DHSC) annual accounts for the financial year ending (FYE) 2021 to FYE 2023. It is reported as goods and services in our measurement of healthcare inputs for 2020 to 2022. This includes the operational costs of NHS Test and Trace, personal protective equipment, and other equipment and consumables procured by DHSC.

Elements of this expenditure capture goods and services that were used across the UK. The total amounts reported in the DHSC annual accounts were split between England and the devolved nations, based on the latest population shares when data were processed in 2022.

Coronavirus adjustment to adult social care

The principle of the quality-adjustment procedure for adult social care (ASC) is to adjust output to account for how well clients' needs are being met within care settings, across domains like accommodation, safety, and dignity. Data relating to these domains are from the Personal Social Services Adult Social Care Survey (ASCS) conducted by the NHS. Up to 151 Councils with Adult Social Services Responsibilities (CASSRs) usually participate, representing a sample size of roughly 60,000 clients.

However, participation in the survey was voluntary because of the coronavirus pandemic, and only 18 CASSRs (with a sample size of 6,695 clients) participated in FYE 2021. We have made two adjustments to address this issue, taking method differences into consideration, to calculate our quality adjustments for community care and for residential and nursing care.

For the community care quality adjustment, we use the data provided for FYE 2021 (from 18 CASSRs) and compare them with data from the same 18 CASSRs in the previous year (FYE 2020) and following year (FYE 2022) to calculate growth rates. Each CASSR is weighted based on the information published in the ASCS. This gives us the most accurate representation of the change in quality, based on those CASSRs for which we have data in both periods.
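A minimal sketch of this matched-sample growth calculation, with three invented CASSRs standing in for the 18:

```python
# Sketch of the matched-sample calculation with invented scores: growth for
# FYE 2021 is computed only from CASSRs present in both years, each weighted
# as published in the ASCS (three invented CASSRs stand in for the 18).

matched_cassrs = {
    # CASSR: (quality score FYE 2020, quality score FYE 2021, weight)
    "A": (19.1, 18.8, 0.40),
    "B": (18.7, 18.6, 0.35),
    "C": (19.4, 19.0, 0.25),
}

growth = sum(w * (s21 / s20 - 1) for s20, s21, w in matched_cassrs.values())
print(f"matched-sample quality growth, FYE 2021: {growth:.2%}")
```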

Analysis of the 18 CASSRs over time showed that while scores showed slightly greater volatility than the whole sample, the overall trends in quality remained similar to the whole sample. Therefore, we use the subsample of 18 CASSRs for FYE 2021 as a reasonable proxy predictor of trends for the whole sample.

The residential and nursing care quality adjustment measures the change in adjusted social care-related quality of life, relative to a predicted change in adjusted social care-related quality of life. This predicted change is produced based on changes in the characteristics of individuals in the data. The scale of the effect of these changes is determined by coefficients produced using a regression model.

These coefficients are usually updated annually. However, owing to the small sample size, they have not been recalculated by a model including FYE 2021. This is because the small sample from this year would introduce only a small number of additional observations to the calculation. Also, because these observations provide only partial geographical coverage, they may not be representative of England. Therefore, predicted scores for adjusted social care-related quality of life have been calculated for FYE 2021 using coefficients produced using data from FYE 2020.

Data are processed as normal from FYE 2022.  

Coronavirus adjustment to public order and safety

Several parameters are included in the quality-adjustment measure for public order and safety (POS). One parameter includes reoffending rates, published by the Ministry of Justice. The data show the proven reoffences that have occurred within the following year. In the context of assessing public service productivity in 2020, this means that proven reoffences for 2019 were followed up in 2020.

Levels of reoffending for these cohorts fell substantially because of the coronavirus pandemic, reflecting the impact of lockdowns and delays in the rate at which reoffences were proven.

These data are inconsistent and incomparable with how reoffending data were collected before the pandemic. In response, the reoffending rate for Quarter 4 (Oct to Dec) 2018 was held constant from 2019 to Quarter 1 (Jan to Mar) 2022. Reoffending returned to historical levels in 2022, so we resumed the use of real-time data. We have used linear interpolation to inform reoffending estimates during the pandemic-affected period, between the last available unaffected data pre-pandemic (July to September 2018) and the point at which we resumed using real-time data (April to June 2022).
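A sketch of that interpolation, with invented rates at the two anchor quarters:

```python
# Sketch of the linear interpolation described above, with invented rates.
# Quarters are counted from the last unaffected pre-pandemic quarter
# (Jul to Sep 2018, quarter 0) to the resumption of real-time data
# (Apr to Jun 2022, quarter 15).
import numpy as np

anchor_quarters = [0, 15]
anchor_rates = [0.28, 0.25]  # invented reoffending rates at the two anchors

pandemic_quarters = np.arange(1, 15)  # the affected quarters in between
interpolated = np.interp(pandemic_quarters, anchor_quarters, anchor_rates)
print(np.round(interpolated, 4))
```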

More information on adjustments to output measurements and how they compare internationally can be found in our International comparisons of the measurement of non-market output during the coronavirus (COVID-19) pandemic methodology.

Mapping links between Classification of the functions of government, service area, and department

Annual public service productivity (PSP) accredited official statistics are produced based on the United Nations Statistical Division's Classification of the functions of government (COFOG). COFOG aligns relevant data with specific service areas and industries. Data that feed into PSP estimates for each service area can be identified and processed on a COFOG basis.

However, these service areas do not precisely align to departmental boundaries. Instead, they reflect internationally recognised definitions, which give a consistent view through machinery of government changes. Productivity estimates are therefore not produced on a departmental level.

Based on the work we have done, we can link service areas with COFOG and departments. For example, the education service area is embedded in COFOG 9 and is linked to the Department for Education (England), the Northern Ireland Executive, the Scottish Government, and the Welsh Government.

The first output on the relationship between department and COFOG is available for 2020 in our COFOG by service area and government department for public service productivity dataset.

Central government expenditure data are sourced in current prices from HM Treasury's Online System for Central Accounting and Reporting (OSCAR) public spending database, which collects financial information from across the public sector.


6. Quality characteristics of the data

Relevance

Relevance is the degree to which the statistical product meets user needs for both coverage and content.

We have reviewed the measurement and reporting of public service productivity (PSP) in the UK. This quality and methodology information (QMI) has been updated to reflect the latest improvements introduced in the National Statistician's Independent Review of the Measurement of Public Services Productivity. These estimates are updated annually, and any methods changes are explained in our earlier papers and articles.

These latest improvements build on the work previously done by the UK Centre for the Measurement of Government Activity (UKCeMGA) (launched in 2005 and closed in 2016) to apply the recommendations of the Atkinson Review (PDF, 1.08MB).

We publish three statistical outputs as part of our Public service productivity: total, UK articles:

  • a volume index of total public services output and indices of output by service area

  • a volume index of total public services inputs and indices of inputs by service area

  • a derived index for total public services productivity and by service area (output per unit of inputs)

We have developed estimates of output, inputs, and productivity for different service areas. Service areas are based on the United Nations Statistical Division's Classification of the functions of government (COFOG). Table 1 shows a summary of which service areas include an input, quantity-based output, quality-adjusted output, and productivity measure. Overall, 60.1% of total public services have quantity-based productivity estimates and 52.2% have quality-adjusted productivity estimates.

Accuracy and reliability

Accuracy and reliability are the degree of closeness between an estimate and the true value.

We construct both the output and inputs series for each service area using a variety of administrative and national accounts data. The accuracy of the derived series depends on the accuracy of the source data. Unless we have introduced substantial methodological changes, the main source of revisions to each service area's productivity estimates will be changes in source data and expenditure weights.

There is no other source of PSP estimates that is comparable in methodology, so validating our results is difficult. We address this by publishing regular triangulation articles, as set out in the Atkinson Review.

It is difficult to provide a confidence interval around our estimates, given the multiple data sources on which the estimates are based. There will inevitably be some margin of error from a "true" measure of productivity, which is unknown. We collate triangulation evidence from other government departments and independent sources, which provides additional context to inform the interpretation of our PSP statistics.

We have sourced alternative data that capture output as accurately as possible, to make data adjustments in response to the coronavirus (COVID-19) pandemic. For example, the mechanisms that were put in place to capture remote learning, attendance, and learning loss for education provide the most accurate indication of how education services were delivered. Conversely, the reduced sample size for the quality adjustment measure for adult social care has likely generated a less accurate indication of how care needs were being met during this period. Users should be cautious when interpreting the overall reliability of the data, considering these were novel changes compared with our normal practices.

There were changes to the way mental health healthcare activity was captured in 2019 and 2020. We still estimate growth in the annual mental health output measure by using a selection of proxy activity indicators from data sources. These proxy indicators are also used to estimate output in the quarterly national accounts, as described in Section 2: Additions to healthcare coverage of our Improvements to healthcare volume output in the quarterly national accounts methodology.

Coherence and comparability

Coherence is the degree to which data that are derived from different sources or methods, but refer to the same topic, are similar. Comparability is the degree to which data can be compared over time and domain, for example, geographic level.

Assessing the coherence of our total PSP data is difficult because no comparable measures are currently published. We convert some source data from financial year to calendar year and aggregate results to a UK level. This makes comparisons at a country level difficult.

Service areas are also defined by COFOG, rather than by administrative department or devolved administration. Some direct comparisons between service areas should not be made. This is because of the different methodology developed for healthcare and education, and the inputs-equals-outputs treatment of several service areas (police, defence, and other).

Our estimates cover the UK and are based on data for England, Scotland, Wales, and Northern Ireland, where possible. Where data are not available for all four countries, we assume that the available data are representative of the UK. This can happen for quality adjustment, output, or inputs data.

Healthcare data have not been available to calculate output volumes for Northern Ireland since 2020. The UK-level estimates for 2020, 2021, and 2022 are based on output growth in England, Scotland, and Wales only. The effect of this is minimal because Northern Ireland only accounts for a small proportion (3% in 2019) of the UK total. We intend to reincorporate Northern Ireland into our healthcare measure when we have two consecutive years of activity data from a new collection. This will mean we can reintroduce estimates for output growth.

In instances where the data are available for all four countries of the UK, there may be slight variations in definitions or reporting conventions. This can introduce additional, largely unquantifiable effects on our estimates.

We maintained a good degree of coherence when identifying data sources to serve as alternative measurements during the pandemic. Service areas such as healthcare, education, adult social care (ASC), and public order and safety (POS) largely used the same data during the pandemic. However, some adjustments and new data needed to be introduced so that the resultant estimates accounted for the influence of the pandemic. For example, new sources like Teacher Tapp were included in the education quantity output adjustments. These data were sufficiently relevant to the topic to justify their inclusion.

However, though their inclusion was justified, some of the adjustments and new data may make some comparisons with pre-pandemic years difficult. For example, there was a reduced sample size in the quality adjustment measure for ASC. We used the National Reference Test to inform education attainment during the pandemic, which meant switching over from traditional attainment sources specific to each school phase and nation.

Comparisons between annual and quarterly estimates

We publish our annual PSP estimates, which are accredited official statistics. Alongside these, we also publish our official statistics in development in our Public service productivity, quarterly, UK bulletins. The quarterly series offers a timelier measure than the annual series, which has a two-year time lag.

However, compared with the annual estimates, our quarterly estimates do not include:

  • quality adjustments on a quarterly basis

  • granular data on activity and unit costs

  • full breakdowns by COFOG

Quarterly estimates are split into seven categories on a Standard Industrial Classification (SIC) basis, with some forecasting. They are then aggregated to total public service productivity. These categories are:

  • healthcare

  • education

  • social protection

  • justice and fire

  • military defence

  • central government services

  • local government services

Inputs for the quarterly estimates use our current price expenditure and appropriate deflators to derive volume estimates of inputs. For more recent quarters, labour inputs use full-time equivalent (FTE) data derived from our public sector employment estimates, and specifically for health, deflated expenditure data on NHS bank staff. The output estimates only account for the volume of activity, not the quality of output. They use our non-seasonally adjusted chained volume measures (CVM).

Expenditure and CVM data are consistent with our non-seasonally adjusted quarterly national accounts (QNA) data. This can lead to further differences with our annual PSP estimates. This is because the annual estimates can implement methods changes before QNA estimates, which are governed by the Blue Book process in the UK national accounts.

We published the quarterly estimates of healthcare inputs, output, and productivity for the first time in February 2025, alongside the estimates of total public service productivity, inputs, and output.

Healthcare is the first service area to be included in our release. This is because it is the largest public service area by share of expenditure and so it is often the main reason for overall PSP movements. We are working to enhance our productivity estimates at the service-area level to enable future releases.

To provide more timely estimates of annual productivity (for total productivity and healthcare) we include quarterly annualised growth rate (QAGR) estimates in our quarterly publication. The QAGR method uses the growth rate in annualised quarterly PSP estimates to produce nowcast annual estimates.
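A minimal sketch of a QAGR-style nowcast, with invented index values; the published ONS calculation may differ in detail.

```python
# Sketch of a quarterly annualised growth rate (QAGR) nowcast with invented
# index values; the published ONS calculation may differ in detail.

quarterly_index = {
    2022: [101.0, 101.4, 101.9, 102.2],
    2023: [102.6, 103.0, 103.3, 103.8],
}

# Annualise by averaging the four quarters of each year.
annualised = {year: sum(values) / 4 for year, values in quarterly_index.items()}
qagr = annualised[2023] / annualised[2022] - 1

# Roll the latest accredited annual estimate forward by the QAGR.
last_annual_index = 100.0  # invented annual index for 2022
nowcast = last_annual_index * (1 + qagr)
print(f"QAGR: {qagr:.2%}, nowcast annual index for 2023: {nowcast:.2f}")
```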

Accessibility and clarity

Accessibility is the ease with which users can access the data, also reflecting the format in which the data are available and the availability of supporting information. Clarity refers to the quality and sufficiency of the release details, illustrations and accompanying advice.

Our recommended format for accessible content is a combination of HTML webpages for commentary, charts, and graphs, with data provided in usable formats like CSV and Excel. We also offer users the option to download our commentary in PDF format. In some instances, other software may be used or may be available on request. The datasets associated with this release have been modified in accordance with the accessibility legislation and are available to users within our terms and conditions (for data on the website).

Notifications of changes in methodology are published on the public service productivity topic-specific methodology page. Historic changes are available in the guidance and methodology area of our archive website.

Timeliness and punctuality

Timeliness refers to the lapse of time between publication and the period to which the data refer. Punctuality refers to the gap between planned and actual publication dates.

We publish estimates of output, inputs, and productivity in the total public sector on a calendar-year basis. We generally refer to the period "t minus 2", with t being the current year of publication. If the reference period were to be moved, for example to "t minus 1", there would be a notable increase in the use of estimation to fill data gaps in the productivity articles, before publication of these datasets.

For more details on related releases, the GOV.UK release calendar provides 12 months' advance notice of release dates. In the unlikely event of a change to the pre-announced release schedule, we will inform users about the change and the reasons for the change, as set out in the Code of Practice for Official Statistics.

Concepts and definitions

Concepts and definitions describe the legislation governing the output and a description of the classifications used in the output.

Our analysis of productivity in UK public services represents internationally pioneering work. Measurement of outputs follows the guidance in the System of National Accounts (SNA) 1993 and subsequent SNA 2008, as well as the European System of Accounts (ESA) 1995 and subsequent ESA 2010.

Measurement of outputs (including the need to measure the change in quality), inputs, and productivity follows the principles in the Atkinson Review. The estimates discussed are for service areas classified by COFOG.

Geography

Estimates are published on a UK geographic basis, with no further geographic breakdown provided. This is unchanged from last year's publication.

Output quality

Our total public service productivity estimate is an accredited official statistic. It measures total productivity and the productivity of nine service areas. It offers comprehensive coverage of the data required by users. Our estimates for the tax administration and social security administration service areas are currently badged as official statistics in development, because of ongoing major methodological development.

Why you can trust our data

Our total UK public service productivity statistic is produced in accordance with the best practices set out in the Statistics Authority's Code of Practice for Statistics and our Data policies.

Any revisions to the data are clearly identified and limitations are explained to all users.

Recent and future improvements

As part of the Public Service Productivity Review, we made a total of 120 recommendations for improving the measurement of public service productivity estimates, of which 8 are measurement principles. Of the remaining 112 recommendations, 87 are still to be completed.

Table 2 summarises the number of recommendations by service area still to be completed. This table will be updated in future updates of this quality and methodology information (QMI).


7. Methods used to produce the public service productivity: total, UK: 2020 data

Main data sources

We use a range of data sources to provide a comprehensive picture of UK public services. We provide a summary of these data sources in our Public service productivity estimates: sources and methods methodology.

How we process the data

The following section outlines the main statistical methods used to compile estimates of public service inputs, output, and productivity. A more detailed explanation of the methods used is given in our Public service productivity estimates: sources and methods methodology. We publish notable methods changes in advance on the topic-specific methodology page to inform users of the nature and likely impact of changes.

Measuring output

The methods of measuring output vary between and within service areas. This section provides a breakdown of methods of measuring output, by output measure, including definition, service areas, and their coverage percentages.

The expenditure shares among service areas in 2022 were:

  • healthcare (39.8%)

  • "other" government services, including general government services, economic affairs except tax administration, environmental protection, housing, recreation, and other public order and safety (16.4%)

  • education (16.0%)

  • defence (9.1%)

  • adult social care (5.4%)

  • police and immigration (5.1%)

  • public order and safety (3.1%)

  • children's social care (2.7%)

  • social security administration (1.6%)

  • tax administration (0.7%)

Quantity output measure

The quantity output measure is the number of activities performed and services delivered. Growth values in individual activities are weighted together using the relative cost of delivery, except for tax administration activities, which are revenue adjusted.

The percentages of service areas with quantity output measures only are:

  • tax administration (89%)

  • public order and safety (35%)

  • healthcare (18%)

  • education (15%)

  • children's social care (9%)

Quality-adjusted output measure

A quality-adjusted output measure is one where the quantity output is adjusted for the quality of the services delivered. If the quality adjustment is positive, estimates of output growth will increase.

The percentages of service areas that are quality adjusted are:

  • adult social care (100%)

  • social security administration (100%)

  • education (85%)

  • healthcare (71%)

  • public order and safety (65%)

  • children's social care (62%)

Inputs-equals-outputs

For some services, we cannot measure output directly. We assume the volume of output equals the volume of inputs used to create it, meaning that productivity growth will always be zero.

The percentages of service areas that are inputs-equals-outputs are:

  • police and immigration (100%)

  • defence (100%)

  • other government services (100%)

  • adult social care (66%)

  • children's social care (29%)

  • healthcare (12%)

  • tax administration (11%)

The output measures used are based on, or taken in chained volume terms from, the UK National Accounts, The Blue Book. Most public services are supplied free of charge or at cost price, so they are considered non-market output. The output of most services is measured by the activities and services delivered. These are usually referred to as "direct output" measures. The activities are measured and aggregated into a single volume output, according to their relative cost or share of service area expenditure. This is referred to as a cost-weighted activity index (CWAI).

For "collective services" (those that are not provided to an individual, such as defence), it is difficult to define and measure the nature of their output because the services have complex features. We assume that for collective services, the volume of output is equal to the volume of inputs used to create them. This is referred to as the "inputs-equals-outputs" convention.

We also apply a quality adjustment factor to the volume of activity index of several service areas. The purpose of this is to reflect the extent to which services succeed in delivering their intended outcomes and the extent to which services are responsive to users' needs. This results in estimates that differ from those used in the national accounts.

There are currently seven service areas that include quality adjustment factors, as listed in the Quality-adjusted output measure subsection.

Healthcare

The healthcare productivity quality adjustment is a compound measure made up of five components:

  • short-term post-operative survival rates

  • estimated health gain from procedures

  • waiting times

  • primary care outcomes achievement under the Quality and Outcomes Framework

  • National Patient Surveys scores

This quality adjustment process is applied from 2001 onwards. In the national accounts series, no quality adjustment is currently applied to healthcare output.

Further detail can be found in our Source and Methods Public Service Productivity Estimates: Healthcare methodology (PDF, 329KB) and our Public service productivity estimates: healthcare QMI.

Education

Education output is quality adjusted using three components:

  • attainment for primary school, secondary school, and further education

  • disadvantaged attainment gap (DAG) index at key stage 2

  • student well-being

Further detail can be found in our Public service productivity estimates: sources and methods methodology.

Public order and safety

Quality adjustments are applied to the criminal justice system elements of public order and safety output. This includes output associated with Crown Courts, magistrates' courts, criminal legal aid, the Crown Prosecution Service, and prison and probation services.

The adjustment has two main parts. The first adjusts the whole series by a severity-adjusted measure of total reoffences per offender. The second looks more closely at the individual service areas: it covers escapes from, and safety inside, prisons, using the number of incidents and their severity, and the timeliness with which courts process cases passed to them by the police.

An adjustment has been made to the recidivism indicator because the reoffending data for Quarter 4 (Oct to Dec) 2018 and all of 2019 were affected by the coronavirus (COVID-19) pandemic; the adjustment was also applied to the 2020 data. More information can be found in Section 5: Public order and safety in our Public service productivity estimates: sources and methods methodology and in our Quality adjustment of public service public order and safety output: current method.

Adult social care

Quality adjustment in adult social care applies the concept of adjusted social care-related quality of life, using data from the Adult Social Care Survey. Respondents are asked to rank how well their care needs are met in eight domains, such as food and nutrition, accommodation, and safety. Each level of response is weighted by its importance to quality of life, using weights derived from a separate survey of community care users.

The quality adjustment is produced separately for working age adults with learning disabilities, other working age adults, and older adults, in each of residential and nursing care and community care. The resulting six components are then weighted together using the same measure of public expenditure as is used for the inputs and output. The quality-adjusted output is obtained by taking the rate of change in the aggregate quality adjustment for each year and applying it to the corresponding year of the output index, as sketched below. More information on the methodological developments can be found in our Public service productivity: adult social care QMI.
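As an illustration of this weighting step, the following Python sketch aggregates six hypothetical quality components by assumed expenditure shares and applies the growth in the aggregate adjustment to an output index. The component names reflect our reading of the six groups described above, and all figures are invented.

```python
# Minimal sketch: weight six quality components by expenditure share and
# apply growth in the aggregate adjustment to an output index.
# All figures are invented for illustration; they are not ONS data.

# Quality scores (year t-1, year t) for the six care groups.
quality = {
    "learning_disability_residential": (0.74, 0.75),
    "learning_disability_community": (0.76, 0.77),
    "other_working_age_residential": (0.71, 0.71),
    "other_working_age_community": (0.73, 0.74),
    "older_adults_residential": (0.68, 0.69),
    "older_adults_community": (0.70, 0.70),
}

# Assumed public expenditure shares; they must sum to 1.
weights = {
    "learning_disability_residential": 0.15,
    "learning_disability_community": 0.10,
    "other_working_age_residential": 0.10,
    "other_working_age_community": 0.10,
    "older_adults_residential": 0.35,
    "older_adults_community": 0.20,
}

agg_prev = sum(weights[k] * q0 for k, (q0, _) in quality.items())
agg_curr = sum(weights[k] * q1 for k, (_, q1) in quality.items())
quality_growth = agg_curr / agg_prev

output_index = 102.0  # assumed unadjusted output index for year t
print(f"Quality-adjusted output index: {output_index * quality_growth:.2f}")
```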

Social security administration

The quality adjustment in social security administration (SSA) reflects the rate of "correctness" of benefits administration. This is informed by the Department for Work and Pensions (DWP) fraud and error rates. We determine the fraud and error rates for all benefits administered by DWP, including overpayments and underpayments. These rates are then used to inform a correctness rate for benefits administration over time. We use correctness growth rates to adjust output, according to the incidence of fraud and error in the benefits system. More information can be found in our Public service productivity estimates: sources and methods methodology.
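A minimal Python sketch of this adjustment is shown below, assuming illustrative fraud and error rates rather than published DWP figures: the correctness rate is taken as one minus the combined fraud and error rate, and its growth is applied to output growth.

```python
# Minimal sketch of the "correctness" adjustment for benefits
# administration. Rates are illustrative, not published DWP figures.

# Combined fraud and error rate (overpayments plus underpayments) as a
# share of benefit expenditure, in two consecutive years.
fraud_and_error = {2021: 0.040, 2022: 0.036}

# Correctness rate: the share of benefit expenditure paid correctly.
correctness = {year: 1.0 - rate for year, rate in fraud_and_error.items()}
correctness_growth = correctness[2022] / correctness[2021]

unadjusted_output_growth = 1.015  # assumed quantity output growth
adjusted = unadjusted_output_growth * correctness_growth
print(f"Correctness growth: {correctness_growth:.4f}")
print(f"Quality-adjusted output growth: {adjusted:.4f}")
```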

Children's social care

We apply quality adjustment to each area of children's social care activity. This includes safeguarding, non-secure accommodation, secure accommodation, adoptions, special guardianships, and care leavers. In England, safeguarding and care leavers each have two quality indicators, which are combined into single indices of safeguarding quality and care leavers quality. For care leavers, equal weight is given to the suitable accommodation measure and the not in employment, education or training (NEET) measure. For safeguarding, the weights for rereferrals and reregistrations correspond to the percentage of safeguarding expenditure on children in need and children on child protection plans (84% and 16% in 2019, respectively).

A chain-linked Laspeyres volume index of quality-adjusted output is produced for safeguarding, care leavers, and secure and non-secure accommodation by country. No quality adjustment is applied to adoptions or special guardianship orders; these are chain-linked Laspeyres volume indices.

Measuring inputs

The input measures we use are based on or taken from a mix of expenditure, deflator, and administrative data sources. They include compensation of employees, intermediate consumption, and consumption of fixed capital of each service by central government and local government.

Central government expenditure data are sourced in current prices from HM Treasury's Online System for Central Accounting and Reporting (OSCAR) public spending database, which collects financial information from across the public sector. Annual estimates are derived from monthly profiles of spending for the current financial year and modified to meet national accounts requirements.

Most local government expenditure data are sourced from financial year returns by local authorities, apportioned across calendar years.

Expenditure data are then adjusted for price changes (deflated) using a suitable deflator (price index). This is done to measure input volumes indirectly.

For several inputs (most healthcare and education labour inputs in particular), volume series are measured directly using administrative data sources, such as full-time equivalent staff numbers from NHS staff records.

Deflator or price indices

Where possible, a suitable deflator (price index) or composite deflator is applied to each current price expenditure series to estimate a volume series. Deflators are applied separately for each factor, and the price indices used are specific to each service, as recommended by the Atkinson Review (PDF, 1.08MB). Price indices for labour and procurement should be sufficiently disaggregated to allow for changes in the composition of the inputs. Currently, we take deflators from a range of different sources to best represent changes in prices for each service input. We may use a generic price index, such as our Consumer Price Index: all items, if suitable data are unavailable.
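The following Python sketch illustrates indirect volume measurement: a composite deflator is built as an expenditure-weighted average of input-specific price indices, and current price expenditure is then deflated by it. All figures, and the three-way input split, are assumptions made for the example.

```python
# Minimal sketch: build a composite deflator as an expenditure-weighted
# average of input-specific price indices, then deflate current price
# expenditure to measure input volumes indirectly. Figures and the
# three-way input split are illustrative assumptions, not ONS data.

expenditure = {"labour": 600.0, "goods_and_services": 300.0, "capital": 100.0}
price_index = {"labour": 103.0, "goods_and_services": 105.0, "capital": 101.0}
# Price indices are relative to the previous year (previous year = 100).

total = sum(expenditure.values())
composite_deflator = sum(
    (expenditure[k] / total) * price_index[k] for k in expenditure
)

# Indirect volume measure: deflated current price expenditure.
volume = total / (composite_deflator / 100.0)
print(f"Composite deflator: {composite_deflator:.2f}")
print(f"Input volume (constant prices): {volume:.1f}")
```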

These series are aggregated to form an overall estimate of the volume of inputs that are used to provide each of the public services within the total public services.

Further detail can be found in the Public service productivity estimates: sources and methods methodology.

Aggregating service area inputs and output

The expenditure shares of each public service component are calculated using a breakdown of general government final consumption expenditure (GGFCE) by COFOG.

Aggregating output

We produce estimates of total public sector output by weighting and then aggregating the volume of output in each service area. The weights used in this process are the service area COFOG expenditure weights, which are then applied to form a Laspeyres volume index of total public service output.

This method follows the formula:

$$\frac{I_t}{I_{t-1}} = \sum_i w_{i,t-1} \, R_{i,t-1,t}$$

where \(I_t\) is the Laspeyres volume index in period \(t\), \(w_{i,t-1}\) is the value share of item \(i\) in the base period \(t-1\), and \(R_{i,t-1,t}\) is the volume relative (the ratio of the quantity of an activity in period \(t\) to the quantity of the same activity in the base period).

In the context of public service output, the weights \(w_i\) are indicative of the relative value of different activities \(q_i\). Unit costs can be used to approximate the "price" of an activity \(p_i\), given the difficulty of accurately estimating the relative social and economic value of different activities. However, in practice, expenditure shares from public finance data are generally used to approximate the relative value \(w_i\) of activities. The weights for different activities are taken from the first year of each activity pair (the base year \(t-1\)).

For example, if we were combining activity series for each constituent part of the UK for 2010, we would weight the activity growth from 2009 to 2010 for England, Scotland, Wales, and Northern Ireland by their respective expenditure shares in 2009.
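The worked example above can be written out as a short Python sketch of the base-weighted (Laspeyres) aggregation and the chain-linking step; the expenditure shares and growth rates are illustrative only.

```python
# Minimal sketch of the base-weighted (Laspeyres) aggregation and
# chain-linking described above. Shares and growth rates are illustrative.

countries = {
    # country: (expenditure share in 2009, output growth 2009 to 2010)
    "England": (0.84, 1.020),
    "Scotland": (0.08, 1.010),
    "Wales": (0.05, 1.015),
    "Northern Ireland": (0.03, 1.005),
}

# Laspeyres growth: sum of base-period value shares times volume relatives.
aggregate_growth = sum(w * r for w, r in countries.values())

# Chain-linking: apply the year's growth to the previous index level.
index_2009 = 100.0
index_2010 = index_2009 * aggregate_growth
print(f"Aggregate output growth, 2009 to 2010: {aggregate_growth:.4f}")
print(f"Chained output index, 2010: {index_2010:.2f}")
```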

Aggregating inputs

Estimates of total public sector inputs are produced in a similar manner. This involves weighting and then aggregating the volume of inputs in each service area, using the same COFOG expenditure weights as in the calculation of aggregate output. This produces a Laspeyres volume index of inputs for total public services.

Measuring productivity

We calculate estimates of total public sector productivity using the aggregate output and inputs indices produced using the approach already discussed.

Including the police, defence, and other government services, which are measured using the inputs-equals-outputs convention, limits growth in total public service productivity, pushing estimates of productivity growth towards zero. The extent to which these areas affect the growth of total public service productivity is proportional to their share of total expenditure.

During periods when productivity growth in the directly measured service areas is positive, the inputs-equals-outputs convention will reduce total productivity growth; when it is negative, the inclusion of the police, defence, and other government services will tend to raise productivity growth estimates.
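To illustrate this dampening effect, the following Python sketch compares productivity growth for a stylised two-area total, where one area is measured using the inputs-equals-outputs convention; the shares and growth rates are invented.

```python
# Minimal sketch of how inputs-equals-outputs areas pull measured
# productivity growth towards zero. All figures are invented.

# Each area: (expenditure share, output growth, inputs growth).
measured_area = (0.70, 1.03, 1.01)    # output measured directly
convention_area = (0.30, 1.02, 1.02)  # output set equal to inputs
areas = [measured_area, convention_area]

output_growth = sum(w * out for w, out, _ in areas)
inputs_growth = sum(w * inp for w, _, inp in areas)
productivity_growth = output_growth / inputs_growth

# For comparison: productivity of the directly measured area alone.
standalone = measured_area[1] / measured_area[2]

print(f"Total productivity growth: {productivity_growth:.4f}")
print(f"Directly measured area alone: {standalone:.4f}")
```

With these assumed figures, the convention area dilutes measured productivity growth from about 2.0% to about 1.4%, in proportion to its 30% share of expenditure.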

How we analyse and interpret the data

We calculate the contributions of each service area to total growth in output, inputs, and productivity, as well as the size of any revisions. These findings are presented in a series of charts for stakeholders within the Office for National Statistics (ONS), along with the reasons behind changes in the figures.

The data are then published for use by various external stakeholders. Our external stakeholders are welcome to provide feedback, tell us how they use the statistics, and advise on where we should focus our future work on public service productivity.

How we quality assure and validate the data

We follow several procedures to quality assure the data. These processes are applied at all stages of the production process, at both granular and aggregate levels.

We carry out internal quality assurance at all of the main processing stages. This includes discussions with data suppliers and experts in the field during the processing and analyses. We parallel run two aggregation systems (by service area and by component) to check the accuracy of the data and the processing system simultaneously. When results are finalised, they are presented to the team, ONS experts in this field, and senior leadership teams.

External quality assurance follows the Code of Practice and our policy for accessing unpublished and sensitive information. We have established a list of representatives from government departments and academics to provide ad hoc quality assurance of our estimates before they become public.

We create visual presentations from the processed data. These presentations are used for internal analysis and curiosity sessions, to highlight notable data points or patterns that may warrant further investigation.

How we disseminate the data

We publish our Public service productivity: total, UK articles free of charge once a year in the Public services productivity section of our website. Supporting documents are clearly linked and accessible to users. Additional data can be provided on request.

How we review and maintain the data processes

Further revisions to the estimates may be required, for example, because of changes to source data. Any such revisions follow our Revisions policy for economic statistics and our Revisions policy and correction of errors policy.

Back to table of contents

8. Other information

Assessment of user needs and perceptions

Our productivity releases have a range of users. We have developed two main ways to gather information on users and uses of our public service productivity estimates.

We hold regular user consultation and stakeholder engagement meetings on healthcare, education, and adult social care. These meetings allow for the exchange of information on data sources, development issues, and method changes that affect our public service productivity estimates.

We also circulate a user feedback questionnaire to those users who make enquiries about public service productivity.

For further information, please email the Public Service Productivity Review team at psp.review@ons.gov.uk.

Back to table of contents

10. Cite this methodology

Office for National Statistics (ONS), released 17 November 2023, ONS website, methodology, Public service productivity: total, UK QMI

Back to table of contents