The Fiduciary Test for AI in Proxy Voting: A Conversation on Applied AI, Standards, and Responsibility

April 27, 2026 / 3 min read
Kaveh Beigi
Vice-President, Digital and Content Strategy
Arik Brutian
Senior Vice-President, Artificial Intelligence & Data

Key Takeaways

This discussion serves as a companion to AI and the Fiduciary Test, the first in a series of white papers on enterprise-level, investment-grade AI in proxy voting. The paper makes the case that as AI capabilities become widely available across the governance industry, the question for institutional investors is no longer whether to use AI in governance workflows. It is what standard governs the AI as applied to fiduciary responsibilities, and whether the systems being offered are built to meet that standard.

To go deeper into what that standard looks like in practice, I sat down with Arik Brutian, Glass Lewis’ Senior Vice President of Artificial Intelligence and Data.

How We Got Here, and What “AI in Proxy Voting” Actually Means

Kaveh Beigi: You’ve been working in applied AI since 2014, well before the current wave. What does that vantage point tell you about how the technology is being used today?

Arik Brutian: I started building machine learning solutions and teams in 2014, beginning with support vector machines for ESG data classification. From there the field moved through random forests, knowledge graphs, deep learning, transformers, and eventually the generative AI moment in late 2022.

What that arc conveys is that AI has matured from a skunkworks technology into an enterprise capability that can be applied to almost any field, including proxy voting. The question is no longer, “can we make this work?” but “how do we design, develop, or deploy it in a way that meets the standards our clients and regulators expect?”

KB: When someone says, “we use AI in proxy voting,” what do they actually mean?

AB: When people talk about AI in proxy voting today, they often mean large language models, or LLMs specifically. LLMs are powerful, and they are now being used to extract data, reason over it, and generate results at speed. But LLMs are built to be plausible and convincing, which is not the same as being right in every case.

The most valuable insights in governance are often the least predictable ones, the ones that do not follow the expected pattern. LLMs are designed to produce the most likely next piece of text based on patterns in their training data. They are pattern matchers, optimized for plausibility, and they hit an internal wall when the right answer is not the most probable one.

At Glass Lewis, we use an ecosystem of AI technologies: LLMs where their horsepower fits, classifiers and extraction tools, and rule-based and symbolic AI systems where determinism matters. There is no silver bullet, no single technology that can handle all of the complexities of proxy voting.
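To make that concrete, here is a minimal, hypothetical sketch in Python of how a pipeline might route tasks between a deterministic rule engine and an LLM-backed component. The task kinds, function names, and routing table are invented for illustration; this is not Glass Lewis code.

```python
# Hypothetical routing sketch: tasks whose outcomes must be reproducible go
# to a deterministic rule engine; open-ended extraction or summarization
# goes to an LLM-backed handler. All names here are illustrative.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    kind: str      # e.g. "policy_check", "narrative_summary"
    payload: str   # the filing text or data slice to process

def rule_engine(task: Task) -> str:
    """Deterministic, auditable logic: same input, same output, every time."""
    # ... symbolic / rule-based evaluation would go here ...
    return f"rule-based result for {task.kind}"

def llm_handler(task: Task) -> str:
    """Pattern-matching horsepower for extraction and summarization."""
    # ... a call to an LLM service would go here ...
    return f"LLM result for {task.kind}"

# Tasks where determinism matters are never routed to the LLM.
DETERMINISTIC_KINDS = {"policy_check", "vote_recommendation"}

def route(task: Task) -> str:
    handler: Callable[[Task], str] = (
        rule_engine if task.kind in DETERMINISTIC_KINDS else llm_handler
    )
    return handler(task)

print(route(Task(kind="policy_check", payload="...")))       # deterministic path
print(route(Task(kind="narrative_summary", payload="...")))  # LLM path
```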

KB: Almost every provider in this space now describes their approach as human-in-the-loop. What does that phrase actually mean to you, and what does it miss?

AB: You can think about AI in proxy voting along two paradigms. One is the human-in-the-loop paradigm as it is commonly practiced.1 But if you look closely, what you usually find is a human at the end of the loop: someone reviews or approves the output of an AI system. That approach carries real risks, the most significant being automation bias, which regulators have begun to warn about explicitly.

A reviewer working at scale, under proxy season time pressure, gets tired. When generative AI outputs are plausible and convincing, reviewers tend to start agreeing with them.

The other paradigm, which Glass Lewis embraces, is what we call human-centric AI (HCAI). Here the human is all over the loop. Human judgment is embedded in the architecture by design, through the methodology that governs what the AI is permitted to conclude. At every stage of the production process, the AI’s work is transparent to the human, and the human can stop it, edit it, or override it. The difference between checking outputs and governing the process is fundamental, and in a fiduciary context it is the difference that matters.
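As a rough illustration of the difference between a human at the end of the loop and a human all over it, the sketch below threads a review checkpoint through every pipeline stage, so a reviewer can accept, edit, or halt the run at each step rather than only at the end. The stage names and review interface are assumptions for illustration, not a description of any production system.

```python
# Hypothetical "human all over the loop" sketch: every stage surfaces its
# intermediate output to a reviewer before the next stage runs.

from enum import Enum

class Decision(Enum):
    ACCEPT = "accept"
    EDIT = "edit"
    HALT = "halt"

def review(stage: str, output: str) -> tuple[Decision, str]:
    """Stand-in for a human review step; this placeholder auto-accepts."""
    print(f"[{stage}] produced: {output}")
    return Decision.ACCEPT, output

def run_pipeline(document: str) -> str | None:
    state = document
    for stage, step in [
        ("extract", lambda s: f"extracted({s})"),
        ("analyze", lambda s: f"analyzed({s})"),
        ("draft",   lambda s: f"draft({s})"),
    ]:
        decision, reviewed = review(stage, step(state))
        if decision is Decision.HALT:
            return None       # the human stops the run outright
        state = reviewed      # human edits flow into the next stage
    return state

print(run_pipeline("proxy statement"))
```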

Governance, Data, and the Speed Question

KB: Walk me through how that actually works. How does institutional expertise end up governing an AI system?

AB: We take our methodology, the analytical frameworks and knowledge our analysts and methodologists have built over 20 years, and operationalize it into a structured rule set. That rule set becomes a governing layer over the AI. It defines what each AI agent in the production process is permitted to do, what sources it is allowed to use, what reasoning paths are valid, and what the output has to look like.

For Glass Lewis’ Climate Intelligence solution, as an example, that meant our methodologists, our product team, and our research teams co-designing the system alongside our data scientists and technologists.

In practice, that means the methodologists’ thinking about how a climate disclosure should be evaluated gets operationalized into a structured rule set, and that rule set is then translated into the specific instructions the AI follows at every stage. Those instructions are detailed: they specify whether the AI should use a reasoning model or a fast model for a given task, whether it should be performing extraction or summarization, where it is permitted to look for source data, and what format and length the output has to take.

None of that originates with the AI team alone. It comes out of working sessions with the experts who actually understand what good climate analysis looks like, and that work continues as the system runs in production.
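To make the shape of those instructions concrete, here is a hypothetical sketch of what one operationalized rule might look like as a machine-checkable structure. None of the field names or values come from Glass Lewis; they are assumptions illustrating how methodology can constrain an AI agent.

```python
# Hypothetical per-agent instruction: methodology operationalized into
# constraints the agent must satisfy. All fields are illustrative.

from dataclasses import dataclass

@dataclass(frozen=True)
class AgentInstruction:
    task: str                           # "extraction" or "summarization"
    model_tier: str                     # "reasoning" or "fast"
    permitted_sources: tuple[str, ...]  # where the agent may look for data
    output_format: str                  # e.g. "json", "markdown_table"
    max_output_words: int               # length constraint on the output

# An invented example for a climate-disclosure extraction step.
climate_extraction = AgentInstruction(
    task="extraction",
    model_tier="reasoning",
    permitted_sources=("annual_report", "sustainability_report"),
    output_format="json",
    max_output_words=300,
)
```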

KB: The phrase “investment-grade data” comes up a lot in our white paper. What does that mean, specifically?

AB: Everything in AI starts with data. Even with the most sophisticated models, if you want a system to be better, the first place to look is the data, not the algorithm. So the question of what counts as investment-grade governance data is foundational. We define it through six properties.

  1. Accuracy: Validated against the source evidence.
  2. Consistency: Maintained through deterministic rules across different products and uses.
  3. Completeness: Verified against known coverage gaps for a given use case.
  4. Traceability: An unbroken chain from the source document down to the paragraph or table that supports the inference.
  5. Validity: The data conforms to the right formats and is normalized to what governance terms actually mean in different jurisdictions.
  6. Timeliness: The data is available within the right time window for the process that needs it.

These six criteria are interdependent. If any one of them is missing, you have data, but you do not have data that can support a defensible fiduciary decision.
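As a rough sketch of how some of these properties could be enforced automatically, the example below checks traceability and timeliness on a hypothetical data record; the field names and the 30-day window are assumptions, and real criteria would come from the governing methodology.

```python
# Hypothetical checks for two of the six properties. The other four
# (accuracy, consistency, completeness, validity) would sit alongside.

from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class GovernanceRecord:
    value: str
    source_document: str | None   # traceability: link back to the filing
    source_location: str | None   # down to the paragraph or table
    as_of: date                   # when the data point was captured

def check_traceability(r: GovernanceRecord) -> bool:
    """An unbroken chain from the record back to its source evidence."""
    return r.source_document is not None and r.source_location is not None

def check_timeliness(r: GovernanceRecord, window_days: int = 30) -> bool:
    """The record is fresh enough for the process that needs it."""
    return date.today() - r.as_of <= timedelta(days=window_days)

def passes_checks(r: GovernanceRecord) -> bool:
    # All six properties must hold for a defensible fiduciary decision;
    # only two are modeled in this sketch.
    return check_traceability(r) and check_timeliness(r)
```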

KB: There is a perception that the trade-off in AI is between speed and rigor. How do you think about that?

AB: The trade-off exists, but it is narrower than people think. AI has allowed us to build systems that are both fast and transparent. Could a system be even faster? Yes, if it were a black box. The fastest systems are pure pattern-matching systems with no oversight, no traceability, and no ability to explain how they reached a conclusion. In a context that involves fiduciary duty, that kind of system is not an option.

What we have built at Glass Lewis is faster than any manual process and rigorous enough to be defensible. It is transparent, explainable, interpretable, and regulator-ready. It is also slower than a black box would be. That is a price we have willingly agreed to pay, because in our business the rigor of our data and the soundness of our research matter more than maximum speed.

What to Ask Vendors

KB: Finally, if you were advising an asset manager evaluating AI proxy voting solutions today, what would you tell them to look for?

AB: Do not accept the human-in-the-loop label at face value. Ask the provider to describe a specific governance scenario and exactly where in the production process a human is involved, what methodology they are working from, what they are checking against, and what gets documented. The specificity of the answer tells you everything.

Ask how they define investment-grade data, and then ask them to walk you through how their data meets each property. Ask how their AI handles the markets you actually vote in, not just your home market. Ask what happens when the system encounters a situation it was not designed for: what the escalation looks like, who is accountable, and how the outcome gets documented. These are questions any fiduciary should be able to ask, and any provider should be able to answer specifically.

AI and the Fiduciary Test: A Guide for Institutional Investors in Evaluating AI Proxy Voting Solutions is available at glasslewis.com. The series will continue over the coming months, with the next paper going deeper into the governed data architecture that makes investment-grade AI in governance possible.


Notes and References

1 For further details, see especially Section 3, "Two Approaches to AI in Proxy Voting," of the paper: Brutian, A. and Beigi, K., AI and the Fiduciary Test: A Guide for Institutional Investors in Evaluating AI Proxy Voting Solutions, Glass Lewis, April 4, 2026. https://www.glasslewis.com/ai/fiduciary-test
