Does Auditor Industry Specialization Increase Analysts’ Forecast Accuracy? Evidence from the Listed Firms of Australia

This paper examines the relation between auditor industry specialization and the accuracy of analysts' beginning-of-the-year earnings forecasts. It predicts that higher auditor industry specialization will improve the quality of external financial reports and thus mitigate analysts' forecast error. It also predicts that higher audit quality will be negatively associated with analyst forecast dispersion. Empirical tests on Australian listed firms from 2003 to 2012 do not find evidence of an association between audit firm industry specialization and analysts' beginning-of-the-year earnings forecast error. However, higher analysts' forecast error is associated with lower forecast dispersion among analysts, consistent with analysts forecasting future earnings in a similar manner and sharing similar patterns of deviation from actual earnings. Additional analysis also finds that larger firms have smaller forecast errors than smaller firms. The findings contribute to the growing literature on auditing and financial reporting quality in the Australian context.


Introduction
A high-quality auditor is believed to reduce information asymmetry and increase the quality of financial reporting. According to the FASB Conceptual Framework, the general purpose of external financial reporting is to increase the usefulness of published financial reports to their users by decreasing the information asymmetry between the preparer and the user of financial reports. Primary users, such as potential investors, creditors, lenders, or advisors, need this information to predict the future performance of the firm. Analysts, in particular, rely on published financial reports to forecast future firm performance. It is therefore logical to argue that high-quality financial reports should improve the quality of earnings forecasts. This paper examines the relationship between auditor industry specialization, a common proxy for audit quality, and analysts' earnings forecasts. The analysis is performed on Australian listed firms between 2003 and 2012 and uses both auditor industry market share and auditor portfolio share to determine auditor industry specialization.
This paper is motivated by the question of whether higher audit quality improves the quality of external financial reports. Prior studies in this area focus on the accuracy of analysts' end-of-year (short-horizon) forecasts, which are subject to competing effects of audit quality. Payne (2008) argues that higher audit quality constrains managers' incentives for earnings management, and thus financial statements audited by a specialist auditor increase the level of analysts' forecast error. This argument does not isolate the direct effect of financial reporting quality on analysts' forecast errors. Behn et al. (2008) focus on the accuracy of analysts' end-of-year forecasts but find no evidence that Big N auditor specialization increases the accuracy of analysts' short-horizon forecasts. My paper argues that the effect of audit quality on financial statement quality is best observed using the long-horizon (beginning-of-year) forecast rather than the short horizon, because at that point there is no clear incentive for clients to manage earnings to meet or beat analysts' forecasts. So this paper argues that if audit quality increases the quality of external financial reports, it should have a strong association with analysts' long-horizon forecast accuracy.
In addition, prior studies report an association between auditor industry specialization and forecast accuracy for US firms, and this study examines the relationship in the Australian context. The Australian experience with the Big 4 audit firms is similar to that of the US: globally, the dominance of the Big N audit firms has increased over the years, and in Australia around 90 percent of audit revenue goes to the Big 4 audit firms. Similar to the large corporate scandals in the US, Australia also experienced some major failures (HIH Insurance Group, Harris Scarfe, etc.), which put huge pressure on Australian firms. As part of the post-crisis regulatory reform, the Australian Securities and Investments Commission (ASIC) announced strict reforms aimed at improving audit quality. The CLERP 9 reform in 2002 brought major changes to improve Australian audit quality. Further, the adoption of International Financial Reporting Standards (IFRS) in 2005 also challenged audit firms to increase the perceived reliability of financial reporting. In this study, I therefore also perform an additional test considering the effect of IFRS.
The findings suggest that, in the Australian context, analysts' forecast error does not have any significant association with auditor industry specialization. This paper also does not find any significant relation between analyst forecast dispersion and industry specialization. One possible reason might be the different industry distribution of Australian firms. This paper suggests further study into developing a better proxy for industry specialization applicable to Australian firms.
This paper looks at the association between audit specialization and long-horizon analysts' earnings forecast accuracy in Australian listed firms. Wu and Wilson (2013) conduct a similar study on US-based firms, so this paper adds to the auditing and financial reporting quality literature in the Australian context.

Theory and Hypothesis Development
A growing literature links audit quality with financial reporting quality. An independent, high-quality audit provides the necessary assurance, through external checks, about the integrity of reported earnings. Auditors can negotiate with clients over proposed accrual decisions, and clients sometimes revise their initial accrual estimates under auditor pressure. Audit quality is defined as the probability that a given auditor will detect material misstatements in a client's financial reports and subsequently report the discrepancies (DeAngelo, 1981). According to Knechel et al. (2013), audit quality depends on all the stakeholders in the financial reporting process, who may have different expectations and views regarding audit quality. Through the audited report, the auditor seeks to reduce the uncertainty arising from unobservable risks.
According to the Audit Quality Framework issued by the UK Financial Reporting Council, audit quality is affected by audit culture, the skill and personal qualities of partners and staff, the effectiveness of the audit process, the reliability and usefulness of audit reporting, and some factors outside the auditor's control. One major advantage an audit firm may enjoy arises from increasing audit efficiency and the quality of audit services. As an audit firm concentrates its resources and technologies on a specific industry, leading to economies of scale, specialized audit firms can offer higher-quality audits at a lower price than their non-specialized competitors. According to Kwon (1996), specialized audit firms can enhance audit quality by better assessing their clients' estimates and financial representations, which reduces client discretion in applying accounting principles. Balsam, Krishnan & Yang (2003) provide evidence that auditor industry specialization, a proxy for audit quality, is positively associated with earnings quality: firms audited by industry specialist auditors have higher earnings quality than firms with non-specialist auditors, and this superior earnings quality is argued to reflect higher accrual quality. Balsam et al. (2003) use earnings quality as a proxy for financial reporting quality. Owhoso, Messier & Lynch (2002), examining specialized auditors' ability to detect industry-specific errors, find that auditors with specialized knowledge and skill are better able to detect errors within their industry specialization than outside it, compared to non-specialized teams. Gramling et al. (2001), in their archival research, also show that earnings of firms with specialist auditors predict future cash flows more accurately than those of firms with non-specialist auditors.
Abarbanell & Lehavy (2003), attempting to explain the role of reported earnings in analysts' forecast bias, identify an empirical link between firms' recognition of unexpected accruals and forecast errors. Their findings suggest that firms' reporting choices play an important role in determining analysts' forecast errors. Behn et al. (2008), under the assumption of an association between audit quality and financial reporting quality, also take the accuracy of analyst forecasts as a proxy for future earnings predictability.
Many prior empirical studies test the association between auditor industry specialization and analyst forecast accuracy. Payne (2008) takes a short-horizon forecast and investigates how an audit firm's quality influences analysts' forecast errors, examining the accuracy of analysts' forecasts issued before the release of firms' earnings reports. Payne (2008) argues that a specialist auditor reduces managers' incentives to manage earnings to meet or beat analysts' forecasts and thus reduces analysts' short-horizon forecast accuracy. Behn et al. (2008) investigate whether audit quality is associated with analysts' earnings forecasts but do not find any significant association between Big 5 audit firms and analysts' earnings forecast accuracy. Wu and Wilson (2013) argue that prior studies using short-horizon forecasts may be subject to competing potential effects of audit quality; further, the long-horizon forecast does not induce benchmark-beating incentives. So this paper follows Wu and Wilson's (2013) proposal to use analysts' long-horizon (beginning-of-year) forecasts for studying the relation between audit quality and forecast accuracy in Australian listed firms. Thus the hypothesis is stated below:
H1: Analysts' forecast errors are negatively associated with audit firm industry specialization.
Studies of analysts' forecast accuracy have also examined the association between audit quality and analysts' forecast dispersion (Behn et al., 2008). Studies suggest that uncertainty about future earnings increases with lower financial reporting quality (Imhoff & Lobo, 1992). They further suggest that analysts with more reliable information about future earnings are likely to reach consensus on their forecasted earnings and thus exhibit smaller forecast dispersion. So I hypothesize:
H2: Analysts' forecast dispersions are negatively associated with audit firm industry specialization.

Research Method
ass.ccsenet.org — Asian Social Science, Vol. 16, No. 3, 2020

The following model is used to test the hypothesis. The dependent variable is the analysts' beginning-of-the-year forecast error, measured from the I/B/E/S summary file and I/B/E/S detail file. This comprises the first forecasts issued immediately following the prior year's earnings announcement date. To reduce noise, only forecasts issued within a 90-day horizon are taken. These raw errors are then deflated by (a) stock price and (b) absolute earnings. Following Wu & Wilson (2013), the absolute forecast error is the mean of the absolute value of the forecast error of each analyst contributing a forecast for the client firm within the window. The measures are listed in Table 1. The analysts' forecast dispersion (DISP) is defined as the standard deviation across analysts' earnings forecasts deflated by the stock price.
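The error and dispersion measures described above can be sketched as follows. This is a minimal illustration with made-up data; the column names are illustrative, not the actual I/B/E/S field names.

```python
import pandas as pd

# Hypothetical detail data: one row per analyst forecast within the 90-day window.
forecasts = pd.DataFrame({
    "firm": ["A", "A", "A", "B", "B"],
    "forecast_eps": [1.10, 1.25, 0.95, 2.40, 2.10],
    "actual_eps":   [1.00, 1.00, 1.00, 2.50, 2.50],
    "price":        [10.0, 10.0, 10.0, 25.0, 25.0],
})

# Raw signed error per analyst, then deflate by (a) stock price and
# (b) absolute actual earnings.
forecasts["err"] = forecasts["forecast_eps"] - forecasts["actual_eps"]
forecasts["abs_err_price"] = forecasts["err"].abs() / forecasts["price"]
forecasts["abs_err_eps"] = forecasts["err"].abs() / forecasts["actual_eps"].abs()

# Firm-level measures: absolute forecast error is the mean absolute error
# across analysts; DISP is the std dev of forecasts deflated by price.
by_firm = forecasts.groupby("firm")
fe = by_firm["abs_err_price"].mean()
disp = by_firm["forecast_eps"].std() / by_firm["price"].first()
```

The same grouping logic extends to the signed error and the earnings-deflated variant by swapping the aggregated column.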
This study uses both audit firm industry market share and audit firm portfolio share as measures of industry specialization, following Payne (2008). Instead of using total assets as a proxy, this study uses total audit fees to calculate industry specialization, on the assumption that audit fees are a more accurate basis for this calculation in the Australian context. Previous studies (Payne, 2008; Behn et al., 2008; Wu & Wilson, 2013), based on US firms for which total audit fee data are not easily available, use total assets as a proxy for audit fees. My study, based on Australian firms, has the advantage of using actual audit fee data.
Acknowledging that each approach (audit firm industry market share and audit firm portfolio share) may be appropriate under a different industry distribution, I use each method separately to measure auditor industry specialization in my dataset. I first estimate a continuous measure of auditor industry specialization under both methods:
INDSP(MKT)_cont = the ratio of the audit fees paid to an audit firm by clients in a specific industry relative to the audit fees paid by all companies in that industry.
INDSP(POWER)_cont = the ratio of the total audit fees of the clients that an audit firm services in a particular industry divided by the total audit fees of all clients of that audit firm.
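The two continuous measures can be computed directly from client-level audit-fee data. A small sketch with fabricated auditor, industry, and fee values:

```python
import pandas as pd

# Illustrative client-level data (auditors, industries, and fees are made up).
clients = pd.DataFrame({
    "auditor":  ["KPMG", "KPMG", "EY", "EY", "KPMG"],
    "industry": ["Materials", "Materials", "Materials", "Energy", "Energy"],
    "audit_fee": [100.0, 50.0, 50.0, 120.0, 80.0],
})

# Fees earned by each auditor within each industry.
pair_fee = clients.groupby(["auditor", "industry"])["audit_fee"].transform("sum")

# INDSP(MKT)_cont: auditor's fees in an industry / total fees in that industry.
ind_total = clients.groupby("industry")["audit_fee"].transform("sum")
clients["indsp_mkt"] = pair_fee / ind_total

# INDSP(POWER)_cont: auditor's fees in an industry / auditor's total fees.
aud_total = clients.groupby("auditor")["audit_fee"].transform("sum")
clients["indsp_power"] = pair_fee / aud_total
```

Here KPMG's Materials market share is 150/200 = 0.75, while its portfolio share in Materials is 150/230, illustrating how the two approaches can rank the same auditor differently.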
Following Payne (2008), I also employ a dichotomous measure of specialization, which classifies an audit firm as a specialist in a particular industry. For the first hypothesis test, analysts' forecast dispersion (DISP) is included as a control variable in Equation (1), following Wu & Wilson (2013), to control for the impact of analysts' broader information environment on their forecast accuracy. As forecast dispersion is found to be positively related to forecast errors (Zhang, 2006), a positive coefficient is predicted. SIZE is included as a control for variation in company size. Prior studies find both positive and negative relationships between firm size and forecast error (Dhaliwal et al., 2010; Collins et al., 1987), so no sign is predicted for SIZE. Companies with a larger analyst following (NUMEST) are argued to face greater competition among analysts, increasing incentives for accuracy. Loss-making firms are subject to more uncertainty and thus a greater chance of forecast error, so an indicator variable (LOSS) is included to control for that. This paper also controls for the absolute change in earnings (ABSCHGE), because larger earnings changes have been strongly associated with lower analysts' forecast accuracy (Dichev & Tang, 2009). Following Behn et al. (2008), this paper also includes the natural log of earnings per share to control for the negative relationship between this measure and absolute forecast errors. An earlier study (Andrade & Kaplan, 1998) finds that the higher a firm's leverage, the higher the probability of financial distress; financial distress is therefore proxied by leverage (LEVERAGE), which is predicted to have a positive relationship with forecast error.
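A stylized version of the Equation (1) regression can be sketched as below. The exact specification is not reproduced in this excerpt, so the right-hand side here is an assumption reconstructed from the control variables described above (INDSP, DISP, SIZE, NUMEST, LOSS, plus year/industry dummies omitted for brevity), and the data are simulated, not the paper's sample.

```python
import numpy as np

# Simulated firm-year data; all coefficients and distributions are invented
# purely to demonstrate the OLS estimation step.
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    np.ones(n),                 # intercept
    rng.uniform(0, 1, n),       # INDSP (continuous specialization)
    rng.uniform(0, 0.05, n),    # DISP
    rng.normal(18, 2, n),       # SIZE (log total assets)
    rng.integers(1, 15, n),     # NUMEST
    rng.integers(0, 2, n),      # LOSS indicator
])
beta_true = np.array([0.02, 0.0, 0.5, -0.001, -0.0005, 0.01])
y = X @ beta_true + rng.normal(0, 0.005, n)   # simulated forecast error

# Ordinary least squares via the normal equations (least-squares solve).
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

In the actual study this estimation would be run with the winsorized sample and the full set of YEAR and SIC indicator variables.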
For the second hypothesis test, following Behn et al. (2008), I include forecast horizon (HORIZON), firm size (SIZE), and a financial distress measure (LEVERAGE) as control variables in Equation (2). Because earnings may be linked with analysts' forecasts, I also include the earnings level as a control variable in this equation. I do not include industry dummies, as changes in forecast dispersion across industries are not expected.

Sample Selection
This study includes all Australian firm-years over the period 2003 to 2012. I consider forecasts of annual earnings made about 11 months prior to the current year's earnings announcement (long horizon; Abarbanell and Bernard, 1992) and sort forecasts into a 90-day window. The first forecast made within the 90-day horizon is considered, and the mean and median of the first forecast by each institution are calculated. Table 2 describes the derivation of the observations used to test the hypotheses. I begin with an initial 61,648 firm-year observations for which there are sufficient data to compute long-horizon consensus forecast errors. I then restrict the sample to clients of Big 4 audit firms, reducing the sample by a further 10,753 observations. As the hypothesis tests require only the first forecast made within the 90-day long horizon, I eliminate 20,038 observations for forecasts made after the first. I also exclude observations for which necessary financial data or auditor information is missing, which results in a final sample of 1,766 firm-year observations for 113 Australian firms with complete analyst forecasts and financial information.

As shown in Table 3, the Materials industry has the highest number of firms. Table 3 Panel B does not show any obvious concentration in the firm distribution, which is fairly even from 2006 to 2012. Table 3 Panel C provides the number of observations for each audit firm by industry. Nearly 34% of clients in the Materials industry and more than 50% of clients in the Capital Goods industry are audited by KPMG. Differences from prior studies may arise because Payne (2008) and Wu & Wilson (2013) study industry specialization in US-based firms, while this study is based on Australian firms. As discussed before, the main analysis of this study uses two measures of forecast error, and the descriptive statistics do not show any obvious outliers.
However, further inspection of the highest and lowest 1 percent of the distribution reveals outliers in the dependent variable. To deal with this, the data are winsorized at the 1st and 99th percentiles before running the regressions for the hypothesis tests.
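The winsorization step described above amounts to clipping each series at its 1st and 99th percentiles. A minimal sketch with invented data:

```python
import numpy as np

def winsorize(x, lower=0.01, upper=0.99):
    """Clip values below the lower and above the upper quantile."""
    lo, hi = np.quantile(x, [lower, upper])
    return np.clip(x, lo, hi)

# Example: a forecast-error series with one extreme observation.
errors = np.array([0.01, 0.02, 0.015, 0.018, 5.0])  # 5.0 is an outlier
clean = winsorize(errors)
```

Unlike trimming, winsorizing keeps the observation count unchanged while pulling extreme values in toward the cutoffs, so regressions retain the full sample.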
The mean forecast dispersion (DISP) is about 0.019, suggesting that the average dispersion is roughly 1.9 percent of the stock price, close to the value reported by Behn et al. (2008) (2.1 percent). The difference between the mean and median value of firm size (SIZE) indicates that some large firms in the sample pull the mean above the median. To deal with this, firm size is winsorized at the 1st and 99th percentiles for the hypothesis tests. On average, six analysts provided forecasts for the sample companies, while 18.3 percent of the companies reported a loss. The proportion of firms reporting a loss is higher than that reported by Payne (14.4 percent), as Australian firms in the Energy and Materials sectors report losses more often than US firms. Table 4 Panel B reports t-test results to detect any systematic differences between the clients of industry specialist auditors and non-specialist auditors. Results show that clients of industry specialist auditors do not exhibit significantly different forecast error or forecast dispersion. However, clients audited by industry specialist auditors tend to be larger firms (SIZE), with higher earnings levels (EL) and a significantly higher number of analysts following (NUMEST). These firms also show weak evidence (p = 0.02) of lower leverage than non-specialist clients. One interesting finding is that clients reporting a loss (LOSS) are on average more often audited by industry specialist auditors (21 percent vs. 16 percent). This finding is consistent with Behn et al.'s (2008) US study, which also finds that firms reporting frequent losses are more likely to hire industry specialist auditors.
The correlations between variables are reported in Table 5. The correlation matrix indicates that the absolute forecast error measure (LHMEDIANFEABSDET) is weakly negatively correlated with industry specialization measured under the market share approach. The signed forecast error (LHMEDIANFEPDET) is also negatively correlated with industry specialization measured under the portfolio approach. However, signed forecast error and industry specialization measured under the market share approach show a significantly positive correlation. DISP is negatively correlated with signed forecast error but positively correlated with absolute forecast error. DISP does not show any significant association with industry specialization under either approach. Except for HORIZON, all control variables have a significant but very low correlation with the test variable and the dependent variable. The correlations among the control variables are generally very low and insignificant in most cases. The relationships with DISP, HORIZON, LEVERAGE, and NUMEST are close to expectations and should not affect the estimation of the regression coefficients of interest. Overall, multicollinearity should not seriously affect the estimation of the regression parameters.
Notes to Table 5: * Significance level at 5%; ** 1 = LHMEDIANFEPDET, 2 = LHMEDIANFEABSDET; *** Definitions of the variables are given in the report; **** Due to the outliers described in the report, winsorized data are used for the forecast error measures.

Results
This section reports the results of the hypothesis tests of the overall relationship between industry specialization and forecast accuracy. Table 6 Panel A presents the OLS regression results of Equation (1) for both measures of forecast error (absolute and signed forecast error) and both measures of auditor industry specialization (portfolio share and market share approaches). Estimated coefficients are reported with two-tailed significance levels. For succinctness, I do not report the parameter estimates for the YEAR and SIC indicator variables or the related test statistics for Equation (1). My objective is to test H1, which predicts that auditor specialization (INDSP) is negatively associated with absolute forecast error. Inspection of Table 6 Panel A reveals that the R-squares of the regressions range between 13 and 37 percent, which suggests a reasonably well-fitted model. This differs from the 19.57 percent reported by Payne (2008), probably because of the different country contexts and differences in the measurement of industry specialization and of the control variables. The coefficients for my first hypothesis test do not show any significant association between auditor industry specialization and forecast error under either the market share or the portfolio approach. However, opposite to my prediction, the control variable forecast dispersion (DISP) shows a significantly negative association with forecast error; that is, firms with higher analysts' forecast error exhibit lower forecast dispersion among analysts. One possible explanation is that analysts forecast future earnings in a similar manner and share similar patterns of deviation from actual earnings. HORIZON, earnings level (EL), and LEVERAGE do not show any significant association with the dependent variable.
Consistent with prediction, firms reporting a loss (LOSS) show a negative association with signed forecast error and a significantly positive association with absolute forecast error. The number of analysts following (NUMEST), as predicted, shows a negative association with absolute forecast error when INDSP is measured under the market share approach. The change in earnings (ABSECHG) also shows the predicted positive association with signed forecast error but a negative association with absolute forecast error. Panel B reports the client-firm fixed-effect regression results. The coefficients for industry specialization under the portfolio and market share approaches are similar to those of the plain OLS regression: industry specialization does not show any significant association with absolute or signed forecast error. Coefficients for the control variables (DISP, HORIZON, EL, LOSS, and NUMEST) are also similar to the OLS results reported in Panel A. However, firm size shows a significantly negative association with forecast error, meaning that larger firms have lower forecast errors.
Panel C reports the coefficients for propensity score-matched samples, which control for the endogeneity inherent in the selection of Big 4 auditors. The tabulated coefficients for absolute forecast error and signed forecast error remain insignificant for the main effect of INDSP. The control variables also show results consistent with the plain and fixed-effect OLS regressions reported earlier.
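The matching step can be illustrated with a stylized sketch. This is not the paper's exact procedure: it assumes a single made-up covariate (firm size), fits a simple logistic model of treatment (specialist auditor) by gradient descent, and matches each treated firm to the nearest control on the fitted score.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
size = rng.normal(18, 2, n)                      # hypothetical covariate: log firm size
# Treatment (specialist auditor) is more likely for larger firms.
treated = (rng.uniform(size=n) < 1 / (1 + np.exp(-(size - 18)))).astype(float)

# Fit a logit P(treated | size) with plain gradient descent.
X = np.column_stack([np.ones(n), size])
w = np.zeros(2)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.001 * X.T @ (p - treated) / n

score = 1 / (1 + np.exp(-X @ w))                 # fitted propensity scores
t_idx = np.flatnonzero(treated == 1)
c_idx = np.flatnonzero(treated == 0)
# Nearest-neighbor match (with replacement) on the propensity score.
matches = {i: c_idx[np.argmin(np.abs(score[c_idx] - score[i]))] for i in t_idx}
```

The regression in Panel C would then be re-estimated on the matched treated-control pairs, so that specialist and non-specialist clients are comparable on observables.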
One possible reason for this non-significant association between forecast error and industry specialization may be other variables jointly affecting the dependent variable, such as analyst expertise and audit partner expertise (Wu & Wilson, 2013). Hypothesis 2 predicts that auditor industry specialization is negatively associated with dispersion in analysts' forecasts. Table 7 Panels A, B, and C report the OLS regression coefficients for the unrestricted sample, the firm fixed-effect specification, and the propensity score-matched sample, respectively. In all three specifications, the test variable (INDSP) is positive and weakly significant when industry specialization is measured using the portfolio approach, which is opposite to my predicted direction. All control variables show insignificant results except earnings level (EL) and HORIZON. These results are not consistent with Behn et al. (2008), who use DISP as part of forecast accuracy. A probable reason for this discrepancy is the use of a 90-day long horizon, which is not consistent with that used in Behn et al. (2008). These coefficients may need to be interpreted differently to make more sense of analysts' forecast behavior.

Effect of IFRS
Australia adopted IFRS from January 2005, affecting the treatment of several key accounts in financial reporting. According to Ernst & Young (2005), the changes in financial statements posed challenges to firms as well as investors. Cotter et al. (2012) find that analysts' forecast accuracy improves with the transition to IFRS, implying a positive association between IFRS adoption and the reliability of financial reports to users. To test whether the regulatory responses affecting auditors have any differential effect on firms with industry specialist versus non-specialist auditors, I partition my sample into two periods: transition to IFRS (2003-2004) and post-IFRS (2005-2012). Table 8 reports the regression results for forecast error. The regressions yield a substantially lower R-square for the post-IFRS period, which may result from the non-inclusion of additional variables to control for the post-IFRS period. The association between INDSP (portfolio method) and absolute forecast error is significantly positive. However, INDSP (market share method) and absolute forecast error show a significant negative association both pre- and post-IFRS, as predicted in the main hypothesis (Equation 1). This additional test may imply that my initial model should include an additional variable to control for the effect of IFRS on forecast accuracy. DISP shows a significant positive association with forecast error in the post-IFRS period. LOSS shows a significant positive association with forecast error, as in my main test. LEVERAGE shows a significant negative association with forecast error. These results suggest that the main results should be interpreted with caution and that additional variables may be needed to control for the regulatory changes affecting Australian firms. The corresponding coefficients for forecast dispersion are insignificant, thus providing no evidence of an association with industry specialization either pre or post IFRS adoption.

Samples Defined by the Audit Firm
This section reports regression coefficients estimated within samples defined by audit firm (Deloitte, Ernst & Young, KPMG, and PricewaterhouseCoopers). Table 10 reports the regression coefficients for INDSP (portfolio share method) for each audit firm. Contrary to the main result, this sub-division by auditor shows that industry specialization has a positive and significant association with forecast error for Ernst & Young. Also opposite to the main result, LOSS shows a significant negative association with forecast error for KPMG and PricewaterhouseCoopers. The change in earnings (ABSECHG) also shows a significant negative association with forecast error.

Large vs Small Firm
Industry specialization can vary between large and small firms. In this section, I divide the sample into large and small firms and run the regressions separately for each. Large firms are defined as firms whose total assets exceed the median value of total assets, and small firms as those below the median. Table 10 reports the results for the first hypothesis. As in the main analysis, industry specialization shows no evidence of an association with forecast error. For large firms, forecast dispersion (DISP) and LOSS show negative and significant associations, whereas for small firms LOSS shows a positive and significant association with forecast error. Moreover, the association between ABSECHG and forecast error runs in different significant directions across the two groups. As large and small firms show different results for the control variables, additional control variables may need to be added to the model to account for this.

Conclusion
This working paper examined whether analysts' forecast error is mitigated when the auditor has industry specialization. The results from unrestricted OLS regressions, fixed-effect regressions, and OLS regressions on propensity score-matched samples suggest no significant association between analysts' forecast error and industry specialization. In addition, this study does not find any significant association between dispersion in analyst forecasts and industry specialization. The primary conclusion is that industry specialization may not mitigate analysts' forecast error in the Australian context. There may be other determinants of forecast error that need to be controlled for before drawing a generalized conclusion.
Prior research (Payne, 2008; Wu & Wilson, 2013) suggests that a firm's Z-score could be a better proxy for financial distress than the leverage measure used in this paper. This working paper also fails to control for firms' financial performance, measured by return on equity or return on assets, which may introduce noise into the model. This paper only examines the effect of industry specialization on analyst forecast error, but other confounding variables may affect both the test variable and the dependent variable, such as audit partner expertise, the number of analysts following a firm, and analyst expertise, as found by other researchers. There can also be an endogeneity problem in Equation (1) because the control variables can be correlated with each other; a propensity score matching approach may be preferred to address this type of problem.
Further research can take a different approach to measuring auditor industry specialization applicable in the Australian context, as Australian firms are not evenly distributed across industries. In addition, further studies can apply different methodologies to control for the endogeneity problem.