As part of Glass Lewis’ partnership with Diligent, in 2020 we rolled out a new peer group methodology for our pay-for-performance model.

Peer groups are notoriously difficult both to construct and evaluate. Set the criteria too strictly, and you’ll end up with a tiny group of companies. But if you cast a wide net, the resulting sample may be too inclusive to provide meaningful comparisons.

Our methodology is intended to break the feedback loops that result from a disproportionate focus on companies’ self-selected peer groups, without losing the benefits of that approach. The numbers for the past nine months demonstrate that the new methodology was roundly successful. Even so, we are excited to announce minor refinements that build on a strong result and incrementally improve our research product.

You can download a copy of our Perfect Peer Group white paper here.


Background: A New Who’s Who

Our journey from a ‘peers of peers’ approach to a new methodology started with engagement. Through more than 3,000 meetings with companies and countless discussions with investor clients, we gained a deep understanding of investor and issuer sentiments on peer groups.

  • We found that investors tend to favor industry-based peers, followed by country-based peers.
  • Public companies tend to prefer their self-disclosed peers, a preference that stems from the unique position they feel they hold in the marketplace.

After listening to investors and issuers, in January 2020 we developed a new peer group methodology based on a “proven peer” approach. Under this methodology, we begin with the company’s self-disclosed peers and test them against the independent views of other companies and investors, as well as against fundamental analysis, before ranking peers based on the proven consensus across these views. While the quantitative aspects of the model remain largely similar, the new peer group methodology reflects a significant evolution in our pay-for-performance analysis.
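To make the ranking step concrete, the sketch below shows a highly simplified, illustrative version of the consensus idea behind the proven peer approach. The function name, data structures, and simple vote-counting are assumptions made for this example only; they do not represent the actual tests or weightings used in our model.

```python
from collections import Counter

def rank_proven_peers(self_disclosed, independent_views):
    """Rank a company's self-disclosed peers by how many independent
    views (e.g., other companies' peer lists, investor-preferred
    comparators, a fundamental screen) also identify each candidate.

    self_disclosed: list of candidate peer identifiers disclosed by the company.
    independent_views: list of sets, one per independent source.
    """
    votes = Counter()
    for candidate in self_disclosed:
        # Count how many independent views "prove" this candidate as a peer.
        votes[candidate] = sum(candidate in view for view in independent_views)
    # Highest consensus first; ties broken alphabetically for a stable order.
    return sorted(self_disclosed, key=lambda c: (-votes[c], c))

# Illustrative usage with made-up identifiers
self_disclosed = ["AAA", "BBB", "CCC", "DDD"]
independent_views = [
    {"AAA", "BBB", "CCC"},   # peers-of-peers view
    {"AAA", "CCC"},          # investor-preferred comparators
    {"AAA", "BBB", "DDD"},   # fundamental (industry/size) screen
]
print(rank_proven_peers(self_disclosed, independent_views))
# ['AAA', 'BBB', 'CCC', 'DDD']
```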

Continual Improvement

Following the 2020 season, we set out to review and update our revised approach in order to make the methodology even more robust and flexible. The changes were driven primarily by a handful of results (less than 0.5% of coverage) where our analysts determined that the peer group was simply not a good fit. These cases occurred almost exclusively at the smallest end of the size spectrum.

Our goal was a model that fits not just the largest S&P 500 companies, but also the smallest firms in our coverage. By focusing on the edge cases that did not fit the frame, we were able to identify the key factors limiting potential peers for small firms.

Though the post-season 2020 changes were not drastic relative to fiscal 2019 data, the results highlight the impact of the additional size-related screens, and illustrate the potential for “groupthink” and confirmation bias when relying solely on a peers-of-peers methodology.
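For illustration, the snippet below sketches how a simple size-related screen might exclude outsized candidates for a small company. The market-cap band and function name are assumptions made for this example; they are not the thresholds or screens actually used in our model.

```python
def passes_size_screen(candidate_mcap, subject_mcap,
                       lower_mult=0.25, upper_mult=4.0):
    """Return True if a candidate peer's market cap falls within a band
    around the subject company's market cap.

    The 0.25x-4.0x band is purely illustrative and is not a disclosed
    Glass Lewis threshold.
    """
    return lower_mult * subject_mcap <= candidate_mcap <= upper_mult * subject_mcap

# A $50m microcap screens out a multi-billion-dollar "peer"...
print(passes_size_screen(candidate_mcap=5_000_000_000, subject_mcap=50_000_000))  # False
# ...but keeps a similarly sized firm.
print(passes_size_screen(candidate_mcap=120_000_000, subject_mcap=50_000_000))    # True
```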

Going Forward

Overall, the first proxy season using this approach suggests our proven peer methodology worked as intended: it eliminated outsized and inappropriate peers, reduced the groupthink that results from overlap with companies’ self-disclosed peers, and produced a normal bell-shaped distribution of pay-for-performance grades that reduces the skew towards positive grades.

There were a few outlier cases, mainly reflecting the high variability of peer group quality in the small-cap and microcap spaces. In reviewing these outliers, we have found opportunities for additional refinements to our approach. The results also highlight our broader approach, whereby our analysts apply their discretion and evaluate pay based on both quantitative and qualitative assessments. This is evident in companies that received “D” and “F” grades but nonetheless received favorable recommendations from Glass Lewis 71.9% and 40.2% of the time, respectively.

Peer groups are a central component of how executive pay is determined and assessed. Glass Lewis’ new proven peer methodology successfully builds on the prior standard of self-selected peer groups to incorporate not only the company’s industry, but also its size and complexity. This hybrid approach reflects both the reality of how companies choose their peers and investor preferences for industry- and country-based comparisons, providing a higher level of confidence in the integrity and independence of our peer assessment and pay analysis. We will continue to review and revise our approach, including minor refinements related to the weighting of country peers.