US AI Oversight Through Three Lenses: Investor Expectations, the S&P 100 and Company-Specific Analysis
Key Takeaways
- U.S. investors increasingly expect board-level oversight and disclosure of AI governance, with most favoring formalized oversight structures and transparent reporting.
- Among S&P 100 companies, just over half disclose board-level AI oversight and fewer than one-third disclose both oversight and a formal AI policy, which may reflect uneven governance practices amid limited regulatory guidance.
- Company approaches to AI governance vary significantly, as illustrated by Meta, Citigroup and Lockheed Martin.
- AI-related risks, including bias, copyright infringement, cybersecurity threats, fraud, and reputational harm, are increasing in frequency and materiality, prompting heightened investor scrutiny and shareholder proposals.
- In the absence of comprehensive regulatory guardrails, evolving SEC recommendations and shareholder expectations are likely to drive more robust AI governance frameworks and enhanced disclosure practices in upcoming proxy seasons.
In recent years, technology powered by artificial intelligence (AI) has rapidly proliferated across the U.S. market, with companies increasingly integrating AI into their operations, products and services. While AI technology presents significant opportunities in many areas, its widespread implementation is accompanied by a growing number of related material risks, underscoring the importance of AI governance practices. As such, investors expect boards to ensure they have the tools and frameworks necessary to maintain effective oversight and risk management. As with other emerging risks, companies overseen by boards that promote accountability, maintain director education programs, and include directors with relevant expertise are better positioned to provide effective oversight and avoid the potential pitfalls of AI integration.
This article looks at U.S. investor expectations on AI issues and how companies are responding, including data on the S&P 100, and a deep dive into the approaches taken by three companies with varying levels of AI integration and exposure.
Investor Expectations on Board AI Oversight
In light of this emerging issue, Glass Lewis posed several questions relating to AI oversight and considerations in our 2024 and 2025 policy surveys. According to the results from our policy surveys:
- 67% of U.S. investors evaluate AI issues on a case-by-case basis, while just 29% of U.S. investors do not have any benchmarks or related voting policies for AI issues.1
- 65% of U.S. investors believe all companies should provide clear disclosure of the board’s oversight of AI governance issues and AI ethics.2
- 46% of U.S. investors responded that the entire board, one of the board’s committees, or a standalone committee should be tasked with AI oversight.3
- 49% of U.S. investors further stated that board oversight of AI governance should be codified in a committee charter or relevant governing documents.4
These results reflect the prevalence of AI as an area of concern for investors. And while the approach to addressing AI issues is still nascent, the majority of investors look for some level of disclosure and board oversight pertaining to this area. This concern is further highlighted by the number of AI-related shareholder proposals submitted during the 2025 proxy season, with nine of the 29 technology-related proposals explicitly dealing with companies’ use of AI (compared to nine of 36 technology-related proposals in 2024). These proposals took a variety of forms, illustrating the range of shareholder concerns regarding some of the unique AI-related risks faced by companies.
S&P 100 Company Disclosures on AI
To understand the range of disclosures and approaches to board oversight that companies have adopted, we reviewed the 2025 proxy statements of S&P 100 companies. Notably, 54% provided disclosure of board-level oversight of AI, overseen by either the full board or a committee (Figure 1). Of those companies, 63% designated such oversight to a specific committee (most commonly audit or technology committees) while 37% designated full-board oversight.
Figure 1. AI-Related Disclosures in Proxy Statement by S&P 100 Companies, 2025
Source: Glass Lewis Research. Note: Data cover Jan. 1 to Dec. 31, 2025.
In addition, 45% of S&P 100 companies maintained an AI policy,5 or a clear description of established policies or ethical considerations that govern their organization’s use of AI (Figure 2). Overall, only 28% of S&P 100 companies were found to have disclosure of both board-level oversight and an AI policy. This may be attributed to the lack of consistent regulatory guidance. While the shareholder push for board oversight of AI is evident, market expectations surrounding AI-related disclosures are still emerging, resulting in a range of company disclosures.
Figure 2. Disclosed AI Policy and Board-Level Oversight for S&P 100 Companies, 2025
Source: Glass Lewis Research. Note: Data cover Jan. 1 to Dec. 31, 2025.
Examples of Varying Approaches: Meta, Citigroup and Lockheed Martin
In consideration of our policy survey results above, in which 65% of U.S. investors stated that all companies should provide clear disclosure of the board’s oversight of AI governance issues and AI ethics, we further examined three S&P 100 companies’ 2025 proxy filings. This was done to assess how these companies were meeting these expectations and how their respective industries and levels of AI integration may have shaped their approaches. Specifically, we identified whether the companies:
1. Maintained board-level oversight of AI and, if so, in what form.
2. Disclosed AI-related director skills (in a matrix or in aggregate).
3. Provided for continued director education relating to AI.
4. Maintained an AI policy.
5. Discussed AI risks and mitigation practices in their proxy filing or Form 10-K.
6. Included AI-related shareholder proposals on their 2025 ballot.
In addition to oversight structures, we looked at companies’ disclosures to understand whether they identify AI as a relevant board skill and whether they include directors with these skills on their board. We also looked at whether companies maintained continued director education programs, which help ensure directors are sufficiently versed in AI to support management and company initiatives. Items 1-5 are proactive steps that the board can take to address AI oversight and governance, whereas the presence of AI-related shareholder proposals (item 6) may indicate that the company has not sufficiently addressed investor concerns. See Table 1 below.
Table 1. Summary Assessment of AI Disclosures in Three Selected S&P 100 Companies’ Proxy Filings
Source: Glass Lewis Research. Note: Data cover Jan. 1 to Dec. 31, 2025.
Meta Platforms (META)
Meta Platforms designated board oversight of AI to its Privacy and Product Compliance Committee, stating that the committee is tasked with, “overseeing [Meta’s] product compliance, including in the areas of content governance and integrity, youth and well-being, and artificial intelligence development and implementation, and related risk exposures.”6 Meta also outlined AI-related risks in its Form 10-K,7 as well as ongoing litigation pertaining to copyright infringement after the company allegedly used copyrighted materials to train certain AI models. Notably, several AI-related shareholder proposals made it onto Meta’s 2025 ballot, requesting reports on topics ranging from oversight of AI data usage to the risks of deepfakes in online child exploitation.
Beyond the discussion of board-level oversight and the board’s responses to the shareholder proposals, Meta provided little additional disclosure in its proxy statement, opting not to disclose director skills or continued director education. While Meta disclosed board oversight of AI and AI-related risks, some may view this level of disclosure as insufficient given the magnitude of Meta’s AI integration across its platforms and services. This concern is further amplified by the shareholder proposals included on the company’s ballot, several of which received high levels of unaffiliated shareholder support (masked by Meta’s dual-class structure), indicating that shareholders may be looking for the company to provide additional disclosure addressing these AI-related concerns.
Citigroup (C)
According to Citigroup’s 2025 proxy, the company’s Technology Committee, “reviews trends that may affect the Company’s strategy, including… Artificial Intelligence and Machine Learning.”8 In addition, the committee oversees and reviews, “information from management regarding [Citigroup’s] approach to, and policies, practices and standards related to, Generative Artificial Intelligence.”9
While the company did not identify directors with AI-specific expertise, Citigroup did state that it maintains a robust director education program through which directors receive training on various subjects, including AI. Citigroup further included a discussion in the risk section of its 10-K filing outlining its use of AI-related technologies and its mitigation of the associated risks, notably addressing AI technologies’ role in the, “increased risk of fraud, including identity theft and bypassing of verification controls,” which may result in the, “misappropriation of funds, unauthorized transactions, exposure of sensitive client or Company information, reputational harm and increased litigation and regulatory risk,” many of which specifically impact financial sector companies.10
Of the three companies, Citigroup likely has the least integration of AI within its operations and services; however, as discussed above, AI presents a material threat to companies in the financial sector. As a result, the company has focused its most robust AI disclosures on its highest-impact areas, in this case ensuring effective oversight through its director education program and maintaining risk mitigation frameworks.
Lockheed Martin (LMT)
Lockheed Martin’s 2025 proxy statement provided clear disclosure of board oversight of AI, stating that while, “the full board maintains primary oversight of the Company’s governance of AI and related risks…” different facets of AI-related oversight are designated to three different committees, including the audit, classified business and security (CBS), and governance committees.11 Specifically, the audit committee engaged with, “senior leadership and subject matter experts … [on] the use of AI in finance, accounting and auditing applications,” the CBS committee, “regularly assesses AI matters in the context of [Lockheed’s] classified programs,” and the governance committee, “oversees [Lockheed’s] 2025 Sustainability Management Plan (SMP) which includes a goal for providing AI developers with training on system engineering approaches to AI ethical principles.”12
Notably, Lockheed was the only company of the three to identify directors with AI-related expertise in its skills matrix. Furthermore, AI and cybersecurity risk were identified as investor priorities following the company’s investor outreach, and its AI ethics policy, principles and executive oversight of the responsible use of AI were clearly outlined within its proxy statement.
Lockheed, which considers itself a leader in AI-driven advancements, provided exceptional disclosure in all of the evaluated areas. Notably, for a company that is deeply impacted by AI-related technologies, Lockheed did not face any AI-related shareholder proposals. This may indicate the effectiveness of the company’s engagement program and disclosures and its ability to preemptively mitigate shareholder concerns.
Considerations of AI-Related Risks and Some Mitigation Measures
As demonstrated by the examples above, approaches to oversight and disclosure of AI differ from company to company. While some companies, such as Citigroup, focus their disclosure on the areas most impacted by AI, others continue to lag, providing disclosure disproportionate to their risk exposure. Many companies are choosing to “wait and see” as the risks and market expectations continue to emerge.
However, with the lack of regulatory guardrails in place to guide the responsible use of AI, the need for board oversight has become increasingly critical to encourage effective governance and implementation of AI. As with any risk factor, boards will need to ensure they are adequately prepared to manage and mitigate AI-related risks. This may include continued director education, appointing new directors with AI expertise, as well as establishing processes and designating oversight.
In addition, known AI-related risks and controversies, such as energy usage, integrated biases, copyright infringement, output quality, and cyberattacks, continue to increase in frequency and could lead to material financial and reputational impacts. While certain sectors, such as information technology and consumer discretionary, may experience increased risk exposure given their operations and level of AI integration, the range of continuously evolving AI-related risks warrants consideration and preemptive action across all sectors.
Examples include recent media attention and investigations into the ethics and output quality of AI chatbots at multiple companies,13 including Meta, Alphabet and, more recently, X, where chatbots have reportedly provided incorrect or concerning responses to certain prompts.14 In addition, as demonstrated by the widely publicized 2023 Hollywood strikes, many writers and artists have expressed concerns regarding the elevated risk of copyright infringement resulting from the use of AI products and technology in the film and arts industries.15 Consequently, shareholders will likely place increased scrutiny on companies’ disclosure of AI governance, ethics considerations, and risk mitigation strategies in their filings.
Balancing AI Innovation With Responsible Implementation
As companies navigate the impressive range and capabilities of AI technologies, they are faced with the challenge of balancing innovation with responsible implementation while ensuring that their boards are sufficiently equipped to address any related issues that arise. Since the emergence of AI technologies and their related risks, the landscape has continued to evolve with varying levels of guidance across governing bodies.
However, with the number of AI-related incidents on the rise,16 coupled with shareholder pressure, companies may begin to implement more robust governance frameworks to effectively address these concerns. AI’s rapid integration into companies’ processes, products and services warrants proportional disclosure and engagement efforts to best mitigate shareholder concerns and AI-related risks.
Notably, the Securities and Exchange Commission’s (SEC) Investor Advisory Committee recently recommended that issuers: (i) define “artificial intelligence” in their disclosures; (ii) disclose board oversight mechanisms for AI deployment; and (iii) report on the material effects of AI on internal operations and consumer-facing matters.17 While these recommendations provide some formal guidance, the U.S. market is likely to see a continued range of approaches and disclosures until further guidance or regulations are established. In the upcoming proxy season, investors and stakeholders will be closely monitoring the risks related to AI integration and what market best practices continue to emerge for AI governance and board oversight.
Notes and References
1 Glass Lewis. 2025 Policy Survey Results. Glass Lewis. Oct. 21, 2025. https://grow.glasslewis.com/2025-policy-survey-results
2 Glass Lewis. 2024 Policy Survey Results. Glass Lewis. Nov. 11, 2024. https://grow.glasslewis.com/2024-policy-survey-results
3 Ibid.
4 Ibid.
5 Glass Lewis captures this data point in its ESG profile for Russell 3000 companies.
6 Meta Platforms. Notice of annual meeting and proxy statement 2025. U.S. Securities and Exchange Commission. Accessed Feb. 2, 2025. https://d18rn0p25nwr6d.cloudfront.net/CIK-0001326801/817c2e5b-b0be-40f5-8b52-4f5a0db288c5.pdf.
7 Meta Platforms. Form 10-K. U.S. Securities and Exchange Commission. Accessed Feb. 2, 2025. https://d18rn0p25nwr6d.cloudfront.net/CIK-0001326801/a8eb8302-b52c-4db5-964f-a2d796c05f4b.pdf.
8 Citigroup. 2025 Notice of Annual Meeting and Proxy Statement. Citigroup. Accessed Feb. 2, 2025. https://www.citigroup.com/rcs/citigpa/storage/public/citi-2025-proxy-statement.pdf
9 Ibid.
10 Citigroup. Form 10-K. U.S. Securities and Exchange Commission. Accessed Feb. 2, 2025. https://www.citigroup.com/rcs/citigpa/storage/public/10K20250221.pdf
11 Lockheed Martin. 2025 Proxy Statement & Notice of Annual Meeting. Lockheed Martin Corporation. Accessed Feb. 2, 2025. https://www.lockheedmartin.com/content/dam/lockheed-martin/eo/documents/annual-reports/2025-proxy-statement.pdf.
12 Ibid.
13 Rozen, C. “Big Tech warned over AI 'delusional' outputs by US attorneys general”. Reuters. Dec. 10, 2025. https://www.reuters.com/business/retail-consumer/microsoft-meta-google-apple-warned-over-ai-outputs-by-us-attorneys-general-2025-12-10/
14 Sandle, P. and Tabahriti, S. “UK investigates Musk's X over Grok deepfake concerns.” Reuters. Jan. 12, 2026. https://www.reuters.com/business/media-telecom/uk-regulator-launches-investigation-into-x-over-grok-sexualised-imagery-2026-01-12/
15 Chmielewski, D. and Richwine, L. “’Plagiarism machines’: Hollywood writers and studios battle over the future of AI.” Reuters. May 3, 2023. https://www.reuters.com/technology/plagiarism-machines-hollywood-writers-studios-battle-over-future-ai-2023-05-03/
16 Stanford Institute for Human-Centered Artificial Intelligence. Artificial Intelligence Index Report 2024. Stanford University. Accessed Feb. 2, 2026. https://hai-production.s3.amazonaws.com/files/hai_ai-index-report-2024-smaller2.pdf.
17 U.S. Securities and Exchange Commission. Recommendation of the SEC Investor Advisory Committee Regarding the Disclosure of Artificial Intelligence’s Impact on Operations. Accessed Feb. 2, 2026. https://www.sec.gov/files/approved-artificial-intelligence-disclosure-recommendation-120425.pdf
