Appendix C: The Executive Opinion Survey: The Voice of the Business Community
For almost 40 years, The Global Competitiveness Report has been used by policymakers, business executives, and academics as a tool that provides a valuable portrait of an economy’s productivity and its ability to achieve sustained levels of prosperity and growth. Central to the Report’s index, the Executive Opinion Survey (the Survey) is the longest-running and most extensive survey of its kind, capturing the opinions of business leaders around the world on a broad range of topics for which statistics are unreliable, outdated, or nonexistent in many countries. The Survey thus aims to measure critical concepts—such as appetite for entrepreneurship, the extent of the skills gap, and the incidence of corruption—to complement traditional sources of statistics and provide a more accurate assessment of the business environment and, more broadly, of the many drivers of economic development.
The indicators derived from the Survey are used in the calculation of the Global Competitiveness Index (GCI) as well as a number of other World Economic Forum indexes, such as the Networked Readiness Index, the Enabling Trade Index, the Travel & Tourism Competitiveness Index, the Gender Gap Index, and the Human Capital Index, and in several other reports, including The Inclusive Economic Growth and Development Report and a number of regional competitiveness studies. A truly unique source of data, the Survey has also long been used by international and nongovernmental organizations, think tanks, and academics for empirical and policy work.
The Survey 2017 in numbers
The 2017 edition captured the views of 14,375 business executives in 148 economies between February and June 2017 (see Figure 1). Following the data editing process described below, a total of 12,775 responses from 133 economies were retained. The 2017 edition of the Survey was made available in 39 languages (see Table 1).
Survey structure, administration, and methodology
The Survey comprises 150 questions divided into 15 sections. Most ask respondents to evaluate an aspect of their operating environment on a scale of 1 (the worst possible situation) to 7 (the best). The 2017 edition of the Survey instrument is available in the Downloads section of the Global Competitiveness Report’s page at http://gcr.weforum.org/.
The administration of the Survey is centralized by the World Economic Forum and conducted at the national level by the Forum’s network of Partner Institutes. Partner Institutes are recognized research or academic institutes, business organizations, national competitiveness councils, or other established professional entities and, in some cases, survey consultancies. These institutes have the network to reach out to the business community, are reputable organizations, and have a firm commitment to improving the competitiveness conditions of their economies. (For the full list, see the Acknowledgments section of this Report.)
To gather the strongest possible dataset, Partner Institutes are asked to follow detailed sampling guidelines designed to ensure that the sample of respondents is as representative as possible and comparable across economies within a specific timeframe. The sampling guidelines are based on best practices in the field of survey administration and on discussions with survey experts.
The Survey sampling guidelines specify that the Partner Institute build a “sample frame” (Figure 2)—that is, a list of potential business executives from micro companies, small and medium-sized enterprises, and large companies—from the various sectors of activity, as detailed below. Specifically, the Partner Institutes are asked to carry out the following steps:
- Prepare a “sample frame,” or large list of potential respondents, which includes firms in proportion to the share of GDP accounted for by the sector they represent: agriculture, manufacturing industry, non-manufacturing industry (mining and quarrying, electricity, gas and water supply, construction), and services.
- Separate the frame into three lists: micro companies (fewer than 10 employees), small and medium-sized enterprises (10–250 employees), and large firms (more than 250 employees), again in proportion to the overall representation of these companies in the economy, and with each list ranging over all the sectors.
- Ensure that the list of chosen companies also provides good geographical coverage.
- To reduce bias, randomly select firms from these lists to receive the survey.
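The steps above can be sketched in code. This is an illustrative sketch only: the firm lists, the size-class names, and the quota-rounding rule are hypothetical, not the Forum's actual procedure.

```python
import random

# Hypothetical sample frame: firms grouped by size class. Each list is assumed
# to have been assembled already in proportion to sector shares of GDP.
frame = {
    "micro": [f"micro_{i}" for i in range(50)],
    "sme":   [f"sme_{i}" for i in range(30)],
    "large": [f"large_{i}" for i in range(20)],
}

def stratified_draw(strata, total, seed=0):
    """Randomly draw `total` firms, allocating quotas to each stratum in
    proportion to its share of the overall frame (quota rounding is a
    simplification and may over- or under-allocate slightly)."""
    rng = random.Random(seed)
    grand = sum(len(firms) for firms in strata.values())
    picks = {}
    for name, firms in strata.items():
        quota = round(total * len(firms) / grand)
        picks[name] = rng.sample(firms, min(quota, len(firms)))
    return picks

sample = stratified_draw(frame, total=10)
```

Random selection within each stratum, rather than convenience sampling, is what reduces selection bias while the quotas preserve the proportions of the frame.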
The Survey is administered in a variety of formats, including face-to-face or telephone interviews with business executives, mailed paper forms, and online surveys. To save time and cost, the Forum encourages the use of a dedicated online survey tool provided to the Partner Institutes.
The Partner Institutes also play an active and essential role in disseminating the findings of The Global Competitiveness Report and other reports published by the World Economic Forum by holding press events and workshops to highlight the results at the national level to the business community, the public sector, and other stakeholders.
Data treatment and score computation
This section details the process whereby individual responses are edited and aggregated in order to produce the scores of each economy on each individual question of the Survey. These results, together with other indicators obtained from other sources, feed into the GCI and other research projects.
Prior to aggregation, the respondent-level data are subjected to a careful editing process. A first series of tests is run to identify and exclude surveys whose patterns of answers indicate a lack of sufficient focus on the part of the respondent. Surveys in which at least 80 percent of the answers are identical are excluded, as are surveys with a completion rate below 50 percent. The very few cases of duplicate surveys—which can occur, for example, when a survey is both completed online and mailed in—are also excluded in this phase.
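The two screening rules can be sketched as follows. This is illustrative only: the text does not specify whether the 80 percent rule counts all questions or only answered ones, so this sketch applies it to answered items.

```python
from collections import Counter

def keep_survey(answers):
    """Apply the two screening rules to one survey.
    `answers` is the full list of responses, with None for unanswered items."""
    answered = [a for a in answers if a is not None]
    # Rule 1: completion rate below 50 percent -> exclude.
    if len(answered) < 0.5 * len(answers):
        return False
    # Rule 2: 80 percent or more identical answers ("straight-lining") -> exclude.
    most_common_share = Counter(answered).most_common(1)[0][1] / len(answered)
    return most_common_share < 0.8
```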
In a second step, a multivariate test is applied to the data using the Mahalanobis distance method. This test estimates the probability that an individual survey in a specific country “belongs” to the sample of that country by comparing the pattern of answers of that survey against the average pattern of answers in the country sample (for a more detailed formal description of the tests presented here, see Browne et al. 2016).
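A minimal sketch of the distance computation behind this test follows. The Report does not publish its exact decision rule, so the sketch only returns the squared distances, which one would then compare against a cutoff (for example, a chi-squared quantile).

```python
import numpy as np

def mahalanobis_d2(X):
    """Squared Mahalanobis distance of each respondent's answer vector
    from the country-sample mean. X has shape (respondents, questions)."""
    mu = X.mean(axis=0)
    # Pseudo-inverse guards against a singular covariance matrix.
    vi = np.linalg.pinv(np.cov(X, rowvar=False))
    diff = X - mu
    # d2[i] = (x_i - mu)' * VI * (x_i - mu)
    return np.einsum("ij,jk,ik->i", diff, vi, diff)
```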
A univariate outlier test is then applied at the country level for each question of each survey. We use the standardized score—or “z-score”—method, which indicates by how many standard deviations any one individual answer deviates from the mean of the country sample. Individual answers with a standardized score greater than 3 are dropped.
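A sketch of the z-score rule, assuming the population standard deviation of the country sample is used:

```python
import statistics

def drop_outliers(answers, cutoff=3.0):
    """Drop individual answers whose z-score against the country sample
    exceeds the cutoff (population standard deviation assumed here)."""
    mean = statistics.fmean(answers)
    sd = statistics.pstdev(answers)
    if sd == 0:
        return list(answers)
    return [a for a in answers if abs(a - mean) / sd <= cutoff]
```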
Aggregation and computation of country averages
We use a simple average to compute scores at the economy level. That is, for a given question, all individual answers carry the same weight.
Formally, the country average of a Survey indicator i for country c, denoted \bar{q}_{i,c}, is computed as follows:

\bar{q}_{i,c} = \frac{1}{N_{i,c}} \sum_{j=1}^{N_{i,c}} q_{i,c,j}

where:

q_{i,c,j} is the answer to question i in country c from respondent j; and

N_{i,c} is the number of respondents to question i in country c.
Once responses have been aggregated at the country level, a test to detect statistically excessive perception bias is run. We leverage the strong relationship between the indicators derived from the Survey on one hand and other statistical indicators used in the Global Competitiveness Index on the other hand. A linear regression is used to predict the average score in Survey indicators from the average performance in the other indicators. Survey scores that lie outside the 95 percent confidence interval around the predicted values are automatically corrected by a factor derived from the difference between the observed value and the limit of the confidence interval.
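A rough sketch of this correction follows, assuming an ordinary least-squares fit and simplifying the correction to clipping the score at the limit of the band (the text describes a correction factor derived from the same difference, but its exact form is not given).

```python
import numpy as np

def correct_bias(x, y, z=1.96):
    """Regress country-level Survey scores (y) on a statistical predictor (x)
    and pull any score lying outside the ~95 percent band back to the band's
    limit. Clipping is a simplification of the correction described in the text."""
    slope, intercept = np.polyfit(x, y, 1)
    pred = slope * x + intercept
    resid = y - pred
    se = resid.std(ddof=2)          # residual standard error
    return np.clip(y, pred - z * se, pred + z * se)
```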
Finally, an analysis to assess the reliability and consistency of the Survey data over time is carried out. As part of this analysis, an inter-quartile range test, or IQR test, is performed to identify large swings—positive and negative—in the results. More specifically, for each country we compute the year-on-year difference, d, in the average score of a core set of 66 Survey questions. We then compute the inter-quartile range (i.e., the difference between the 75th percentile and the 25th percentile). Any value d lying outside the range bounded by the 25th percentile minus 1.5 times the IQR and the 75th percentile plus 1.5 times the IQR is identified as a potential outlier. This test is complemented by a series of additional empirical tests, including an analysis of five-year trends and a comparison of changes in the Survey results with changes in other indicators capturing similar concepts. We interview local experts and consider the latest developments in a country in order to assess the plausibility of the Survey results. Based on the result of this test and additional qualitative analysis, and in light of the developments in these respective countries, the data collected in 2017 in Bahrain, Oman, Tajikistan, and Turkey were not used. In those cases, the Survey results from the previous edition are used instead (see Exceptions in Box 1).
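The IQR rule can be sketched as:

```python
import numpy as np

def iqr_outliers(d):
    """Flag year-on-year score changes lying outside
    [Q1 - 1.5 * IQR, Q3 + 1.5 * IQR]."""
    d = np.asarray(d, dtype=float)
    q1, q3 = np.percentile(d, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return d[(d < lo) | (d > hi)]
```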
Moving average and computation of country scores
We then proceed to compute moving averages. The moving average technique consists of taking a weighted average of the most recent year’s Survey results together with a discounted average of the previous year’s results. There are several reasons for doing this. First, it makes results less sensitive to the specific point in time when the Survey is administered. Second, it increases the amount of available information by providing a larger sample size. Additionally, because the Survey is carried out during the first half of the year, averaging the 2016 and 2017 responses better aligns the Survey data with many of the indicators from sources other than the Survey, which are often year-average data.
To calculate the moving average, we use a weighting scheme composed of two overlapping elements. On one hand, we want to give each response an equal weight and, therefore, place more weight on the year with the larger sample size. At the same time, we would like to give more weight to the most recent responses because they contain more updated information. That is, we also “discount the past.” Table 2 reports the exact weights used in the computation of the scores of each country, while Box 1 details the methodology and provides a clarifying example.
Box 1: Country score calculation
This box presents the method applied to compute the scores for the vast majority of economies included in The Global Competitiveness Report 2017–2018 (see text for exceptions).
For any given Survey question i, country c’s final score \bar{q}_{i,c}^{2016,2017} is given by:

(1)   \bar{q}_{i,c}^{2016,2017} = w_c^{2016}\,\bar{q}_{i,c}^{2016} + w_c^{2017}\,\bar{q}_{i,c}^{2017}

where:

\bar{q}_{i,c}^{t} is country c’s score on question i in year t, with t = 2016, 2017, as computed following the approach described in the text; and

w_c^{t} is the weight applied to country c’s score in year t (see below).

The weights for each year are determined as follows:

(2a)   w_c^{2016} = \frac{1}{2}\left(\frac{a\,N_c^{2016}}{a\,N_c^{2016} + N_c^{2017}} + \frac{N_c^{2016}}{N_c^{2016} + N_c^{2017}}\right)

(2b)   w_c^{2017} = \frac{1}{2}\left(\frac{N_c^{2017}}{a\,N_c^{2016} + N_c^{2017}} + \frac{N_c^{2017}}{N_c^{2016} + N_c^{2017}}\right)

where N_c^{t} is the sample size (i.e., the number of respondents) for country c in year t, with t = 2016, 2017, and a is a discount factor whose value is set at 0.6. That is, within the discounted component, each 2016 response receives 60 percent of the weight given to a 2017 response.

Plugging Equations (2a) and (2b) into (1) and rearranging yields:

(3)   \bar{q}_{i,c}^{2016,2017} = \frac{1}{2}\,\frac{a\,N_c^{2016}\,\bar{q}_{i,c}^{2016} + N_c^{2017}\,\bar{q}_{i,c}^{2017}}{a\,N_c^{2016} + N_c^{2017}} + \frac{1}{2}\,\frac{N_c^{2016}\,\bar{q}_{i,c}^{2016} + N_c^{2017}\,\bar{q}_{i,c}^{2017}}{N_c^{2016} + N_c^{2017}}

In Equation (3), the first component of the weighting scheme is the discounted-past weighted average and the second component is the sample-size weighted average; the two components are given half-weight each. One additional characteristic of this approach is that it prevents a country sample that is much larger in one year from overwhelming the smaller sample from the other year.
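Assuming the half discounted-past, half sample-size form described here (the exact published weights may differ slightly), the year weights can be computed as:

```python
def moving_average_weights(n_prev, n_curr, a=0.6):
    """Weights for the previous and current year under the half-and-half
    scheme: one component discounts past respondents by the factor `a`,
    the other weights purely by sample size."""
    disc_prev = a * n_prev / (a * n_prev + n_curr)
    disc_curr = n_curr / (a * n_prev + n_curr)
    size_prev = n_prev / (n_prev + n_curr)
    size_curr = n_curr / (n_prev + n_curr)
    return (0.5 * disc_prev + 0.5 * size_prev,
            0.5 * disc_curr + 0.5 * size_curr)
```

With equal sample sizes, the current year receives more weight than the previous one (for a = 0.6, the split is 0.5625 versus 0.4375), reflecting the discounting of the past.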
As noted in the text, there are a number of exceptions to the approach described above. In illustrating them below, we use actual years—rather than letters—in equations for the sake of concreteness.
In the case of Survey questions introduced in 2017, where, by definition, no past data exist, full weight is given to the 2017 score. For newly covered economies, this treatment applies to all questions.
For countries whose 2017 data were discarded, the results from the previous edition of the Report are used instead. The same treatment applies in countries with a small sample of respondents when, for a given question, the number of answers falls below the threshold of 30. Formally, we have:

\bar{q}_{i,c}^{2016,2017} = \bar{q}_{i,c}^{2015,2016} = w_c^{2015}\,\bar{q}_{i,c}^{2015} + w_c^{2016}\,\bar{q}_{i,c}^{2016}
Example of score computation
For this example, we compute the score of Uruguay for the indicator Burden of government regulation, which is included in the Global Competitiveness Index (indicator 1.09) and derived from the following Survey question: “In your country, how burdensome is it for companies to comply with public administration’s requirements (e.g., permits, regulations, reporting)? [1 = extremely burdensome; 7 = not burdensome at all].” This is not a new Survey question, so the normal treatment applies, using Equation (1). Uruguay’s Survey score was 3.03 in 2016 and 2.72 in 2017, and the size of the sample was 89 in 2016 and 71 in 2017. Using a = 0.6 and applying Equations (2a) and (2b) yields weights of 47.8 percent for 2016 and 52.2 percent for 2017 (see Table 2). The final country score for this question is therefore:

\bar{q}_{i,c}^{2016,2017} = 0.478 \times 3.03 + 0.522 \times 2.72 \approx 2.87
This is the final score used in the computation of the GCI. Although numbers are rounded to two decimal places in this example and to one decimal place in the Uruguay country profile, exact figures are used in all calculations.
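As a quick check, the arithmetic of the example can be reproduced directly, taking the weights and yearly scores reported in the text above as given:

```python
# Weights and yearly scores as reported for Uruguay (see Table 2 and text).
w_2016, w_2017 = 0.478, 0.522
q_2016, q_2017 = 3.03, 2.72

final_score = w_2016 * q_2016 + w_2017 * q_2017
print(round(final_score, 2))  # prints 2.87
```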
Browne, C., A. Di Batista, T. Geiger, and S. Verin. 2016. “The Executive Opinion Survey: The Voice of the Business Community.” The Global Competitiveness Report 2016–2017. Geneva: World Economic Forum.