Part 3.2: Quality assurance

EC research brief

The study should analyse the process providers go through to ensure the accuracy of the data they collect and/or estimate, and the procedures for dealing with any data inconsistency (like in-house estimation models, when reported data is not available).

The study should also analyse the process providers use to ensure the quality of the assessment (like use of certification, audit or reviews by third parties).

The study will also analyse pros and cons of enhancing the quality assurance, as well as whether enhancing data quality would be required to enable informed investor decisions.

What process does each research provider go through to ensure the accuracy of the data that they collect?  Is the QA process fit for purpose? How do sustainable investment data, ratings and research providers check the quality of the output of estimation models that they use?

  • EC interest: "The study should analyse the process that providers go through to ensure the accuracy of the data they collect and/or estimate"; "the study should also analyse the process providers use to ensure the quality of the assessment (like use of certification, audit or reviews by third parties)"; and "The study should analyse the procedures for dealing with any data inconsistency (like in-house estimation models, when reported data is not available)."
  • Your view: Contribute information, ideas & your opinions via this structured survey (most efficient) or by email.

Outstanding questions

We have questions for research providers on:

  • how they ensure the accuracy of the data they collect
  • what quality assurance processes they have for their ratings assessments and
  • what their process is to deal with data inconsistencies.

What we (think we) already know => Context

SustainAbility's Rate the Raters programme has previously identified a basis for assessing the quality assurance of data presented by ESG ratings providers.

Evaluation Criteria

  • Governance & Transparency: Disclosure of Methodology; Conflict Management; Regular Review; Stakeholder Involvement
  • Quality of Inputs: Information Sources; Company Engagement; Input Verification
  • Research Process: Experience and Capacity of Team; Quality Management; Sector Specificity; Basis for Rating
  • Outputs: Validation of Results; Accessibility

It also sets out “strong practice” on each of these elements:

  • Disclosure of methodology: A rater fully discloses its methodology to the public, including its selection process, information sources, criteria, areas of evaluation, scoring schemes, assumptions and rules. This information allows the user to fully understand and replicate how the rating is constructed.
  • Conflict management: Rater has formally articulated its approach to managing conflicts in a policy, guidelines or some other written document. The policy covers key aspects such as services and the independence of partners. In addition, the rater provides no services (related or unrelated to the rating) to rated companies.
  • Regular review: Rater takes a formal approach to reviewing, and updating as needed, its methodology to reflect new / improved information and context. The approach explicitly takes into account stakeholder feedback. The rater publicly discloses these changes and engages companies to explain the modifications. Changes are announced well in advance of when they take effect.
  • Stakeholder involvement: The rater has convened an external advisory panel and / or systematically engages external stakeholders in the development and ongoing maintenance of the rating. The rater is transparent about the nature and outcomes of this convening and engagement.
  • Information sources: Rater gathers information from sources which are current, consistent, credible and diverse. The rater goes beyond company-submitted and public information, for example obtaining information from third-party data providers or stakeholders.
  • Company engagement: The rater takes a systematic approach to engaging companies in the ratings process, including spending equal time to gain an in-depth understanding of each company’s performance and context. Rater regularly solicits feedback from all rated companies to improve the ratings process.
  • Input verification: The rater should have a formal policy and robust process for verifying information used in its ratings. A majority of the inputs to the rating are verified.
  • Experience and capacity of team: At least 75% of a rater’s analysts have 3 or more years of experience in the industry they are covering or on related topics. The rater has a formal approach to ongoing education. The rater’s analysts cover no more than 20 companies each.
  • Quality management: The rater must have a robust and well-documented approach to ensuring quality control throughout the ratings process. The rater’s research process has been verified by a third party.
  • Sector specificity: The rating must be based predominantly on sector-specific criteria and weightings, and the rater incorporates company-specific issues and context in its analysis.
  • Basis for rating: The rating’s scoring scheme is clearly defined and articulated and incorporates the broader sustainable development agenda (e.g. rewards companies that are taking action in line with what IPCC calls for on climate). The rater explicitly ties key external norms, standards or principles to its questions and scoring.
  • Validation of results: Rater has a formal, consistent and robust process for checking final results, including giving rated companies the opportunity to review the results prior to their finalization. Rater has a formal process for addressing challenges or disputes.
  • Accessibility: Rater fully discloses the details of its assessment and/or report to rated companies, and also provides a good level of detail to other stakeholders.
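Two of the "strong practice" criteria above are quantitative (at least 75% of analysts with 3+ years' experience; no more than 20 companies per analyst), so they can be checked mechanically. The sketch below does exactly that, using a hypothetical team schema of our own devising; the remaining criteria are qualitative and not modelled.

```python
def meets_strong_practice(team):
    """Check the two quantitative 'strong practice' thresholds from the
    Rate the Raters criteria: >= 75% of analysts with 3+ years'
    experience, and no analyst covering more than 20 companies.

    team: list of dicts like
        {'years_experience': 5, 'companies_covered': 12}
    (a hypothetical schema, for illustration only).
    """
    experienced = sum(a["years_experience"] >= 3 for a in team)
    share_experienced = experienced / len(team)
    max_coverage = max(a["companies_covered"] for a in team)
    return share_experienced >= 0.75 and max_coverage <= 20
```

A team of four analysts where three have 3+ years' experience (exactly 75%) and none covers more than 20 companies would pass; raising any analyst's coverage above 20 would fail it.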

From our contact with research providers and companies, we know that most QA processes depend on research providers giving companies a 'right of reply' to data published about them. There are, however, numerous inefficiencies in this process. Some of the leading providers have appointed 'issuer relations' officers to make the company feedback process more efficient, although opinions vary on whether this is desirable.

Extensive and frequent discussions that we have had with companies and research providers on this point indicate that:

  • Companies are frustrated by inaccuracies in the data presented (failures of QA processes) and resent the amount of time it takes to check and correct data published about them
  • Research providers tend to argue that giving companies the opportunity to check and correct data is, in effect, an infallible QA process, since the company has 'right of reply'
  • Companies counter that their frustration lies not so much in checking data as in having to supply data they have already put in the public domain (i.e. the research provider has not even checked their CSR report)
  • Investors tend to be (marginally) embarrassed when data quality problems at their providers are pointed out to them but, it appears, not sufficiently inconvenienced to make data quality a significant criterion in their research purchasing process

This situation has not changed significantly over a period of ten years.

One particular area to note is the webcrawl-derived 'controversy' reports (which drive companies to distraction) as there appears to be limited human / analytical oversight of this process and few filters for accuracy / scale / relevance / timeliness etc.  These are largely presented as data without context.
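The filters that appear to be missing from the controversy-report process (accuracy, scale, relevance, timeliness) could be sketched as a simple pre-publication screen. The code below is illustrative only: the `ControversyItem` schema and the credibility/relevance scores are our own assumptions, not any provider's actual pipeline, and in practice such scores would come from source whitelists or NLP models.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ControversyItem:
    """One webcrawled news item about a company (hypothetical schema)."""
    company: str
    headline: str
    published: date
    source_credibility: float  # 0-1, assumed score for the outlet
    relevance: float           # 0-1, assumed keyword/NLP topic score

def needs_human_review(item: ControversyItem,
                       max_age_days: int = 90,
                       min_credibility: float = 0.5,
                       min_relevance: float = 0.6) -> bool:
    """Screen webcrawled items before they reach a controversy report.

    Items that are stale, from weak sources, or off-topic are dropped;
    the rest are queued for analyst review rather than published raw.
    Thresholds are arbitrary placeholders.
    """
    fresh = (date.today() - item.published) <= timedelta(days=max_age_days)
    return (fresh
            and item.source_credibility >= min_credibility
            and item.relevance >= min_relevance)
```

Even a crude screen like this would route items through an analyst rather than publishing raw webcrawl output as data without context.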

We are aware of some providers that have submitted their assessment process to certification, audit or third-party review, but we are not aware of how standard this is, or whether it is still ongoing.

What are the pros and cons of enhancing QA?

  • EC interest: “The study will also analyse pros and cons of enhancing the quality assurance, as well as whether enhancing data quality would be required to enable informed investor decisions”
  • Your view: Contribute information, ideas & your opinions via this structured survey (most efficient) or by email.

Outstanding questions

We would like to know from asset managers and companies how accurate they feel the sustainability data used by research providers is and whether there would be material advantage in it becoming more accurate.

What we (think we) already know => Context

The 2019 ‘Rate the Raters’ report identified the ‘credibility of data sources’ and the ‘quality of methodology’ as the two most-cited contributors to rating quality. Among all respondents, 95% regarded credibility of data sources as an important factor, and 92% regarded quality of methodology as a key factor. For corporate respondents, these figures were 94% and 93%; for NGOs, 97% and 87%. Other factors mentioned by respondents included ‘consistency’ and ‘external validation’. 57% of respondents listed ‘improved quality/disclosure of methodology’ as their preferred change over the next five years.

In a separate survey of investors, quality of methodology was ranked highest, with focus on relevant/material issues, credibility of data sources and disclosure of methodology following closely behind.

We believe that there are three key elements of quality assurance in the ESG ratings process:

  • The verification and cleaning of data
  • The checking of estimates (including refining the estimation methodology)
  • The monitoring of ratings published to ensure that they are consistently generated according to clearly-articulated methodologies
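The first two elements above (verifying data and checking estimates) can be illustrated with a minimal reconciliation sketch. The record schema, field names and the 10% divergence threshold are all assumptions made for illustration, not any provider's actual rules.

```python
def qa_flags(record):
    """Return QA flags for one company datapoint.

    record: dict with optional keys 'reported' and 'estimated'
    (a hypothetical schema). Flags mirror two of the QA elements:
    missing data that needs estimation, and an estimate that
    diverges materially from the reported value once both exist.
    """
    flags = []
    reported = record.get("reported")
    estimated = record.get("estimated")
    if reported is None and estimated is None:
        flags.append("missing: no reported or estimated value")
    if reported is not None and estimated is not None:
        # Relative error of the estimate against the reported figure;
        # the 10% threshold is an arbitrary placeholder.
        err = abs(estimated - reported) / max(abs(reported), 1e-9)
        if err > 0.10:
            flags.append(f"estimate off by {err:.0%} vs reported")
    return flags
```

The third element (monitoring that published ratings follow the stated methodology) would sit downstream of checks like these, aggregating flags across the whole rated universe.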

There is an argument that research providers have outsourced part of the verification and cleaning of data to companies by asking them to check their own data. Understandably, companies object to the costs of this. However, it also gives them an influence over the process that others may deem inappropriate.

The following ‘pros’ and ‘cons’ of enhancing the quality assurance seem likely and will be tested by our research.

Advantages of improved quality assurance:

  • Would save companies time (depending on how much of the QA process is cleaning and verifying corporate-sourced data)
  • Would increase credibility of ratings, both individually and as an industry
  • Would deliver better data to investors (although as we discuss elsewhere, this may not be valued)
  • May encourage more sustainability actions from companies due to higher confidence in ratings
  • Could lead to more standardization of methodologies for estimates and for ratings

Disadvantages of improved quality assurance:

  • The process of enhanced data cleaning could be very expensive
    • (unless web-crawling and natural language processing technology can be applied in a way that delivers contextual understanding as well as simple datapoints)
  • There is little merit in improving quality assurance until / unless the standards against which such assurance is to be given are clearly articulated.

Importantly, we will also test which of the supplier (corporate), the customer (investor) or the provider (ratings agency) seems inclined to pay the inevitable costs of further quality improvement.