AI AND PROXY VOTING GUIDE · APRIL 2026
AI and the Fiduciary Test
A Guide for Institutional Investors in Evaluating AI Proxy Voting Solutions
As AI becomes more embedded in proxy voting, human oversight has become an expected baseline. At Glass Lewis, the focus is not just on whether oversight exists, but on the standards that govern how AI is applied to fiduciary responsibilities. This paper outlines an evaluation framework that institutional investors can use to determine whether a system meets that standard.

THE EVALUATION FRAMEWORK
Five questions every asset manager should ask
Each question targets a layer of the AI governance stack that a marketing brief will not reveal.
01
What governs the underlying data?
The architecture, schemas, and normalization rules behind any AI output.
02
What is the human actually doing?
Where expertise enters the production process, and what it governs.
03
How is investment-grade defined?
The specific standards any data claim should be tested against.
04
What is the AI's designed scope?
The markets, regimes, and scenarios the system was built to handle.
05
What is the accountability structure for exceptions?
The escalation pathway when a situation exceeds the AI's designed scope.
THOUGHT LEADERSHIP SERIES
AI and the Fiduciary Test
An ongoing thought leadership series on the standards that should govern AI in proxy voting and stewardship.