Investor expectations are changing rapidly, and boards will be asked to demonstrate readiness by the 2026 proxy season. That was the focus of my recent corporate director briefing, moderated by Dominique Shelton Leipzig. She is CEO of Global Data Innovation and one of the nation's leading authorities on AI, privacy, and data governance; with more than 30 years of experience in Big Law, she has trained more than 50,000 professionals and advised Fortune 500 boards on responsible innovation. She was joined by Roosevelt Giles, a global expert in business transformation and cyber strategy who chairs the Stakeholder Impact Foundation and founded Endpoint Ventures, which invests across five continents, and by Christine Heckart, a veteran technology CEO and director and the founder and CEO of Xapa, an AI platform that accelerates workforce transformation. Together, they argued unequivocally that fiduciary duty applies to AI oversight and that the era of “passive awareness” is over.
Why AI is a board-level fiduciary issue
Giles framed AI as a systemic risk, not merely an operational one. AI can move revenue, TSR, ROIC, and cost structures, and therefore fits squarely within the board’s mandate to protect and grow long-term shareholder value. As new technology outpaces traditional charters and bylaws, fiduciary duties will “fill the gap,” he said: they are the standard by which courts and investors judge whether the board asked the right questions and provided adequate oversight.
Shelton Leipzig noted that major investors such as BlackRock and Allianz and the proxy advisor Glass Lewis have already updated their stewardship guidelines, with ISS close behind. They expect boards to demonstrate AI literacy and to document director training and oversight frameworks in their proxy statements by the 2026 voting season. Boards that fall short could face withhold recommendations, reputational damage and, where governance failures are significant, Caremark derivative claims.
TRUST Oversight Framework (What Boards Should Ask)
To help boards navigate the sprawling, overlapping regulatory landscape (the EU AI Act, NIST, ISO, and dozens of new state laws), Shelton Leipzig distilled it into her TRUST framework, five practical pillars for board oversight:
- Triage. Ensure AI use maps to corporate strategy. Identify where AI is being deployed and which laws and risk tiers apply (prohibited, high, or low). Many “shadow AI” projects fall outside corporate priorities; triage surfaces them so they can be shut down.
- Right data, and the right to train on it. Boards must ensure that training data is accurate, legally sourced, and covered by the necessary intellectual property and privacy rights. Poor data hygiene can derail a program and create liability.
- Uninterrupted testing, monitoring, and auditing. Build accuracy thresholds, escalation paths, and human safeguards directly into the system rather than relying on policies that sit on a shelf. If you would never allow your customer service reps to abuse customers, don’t allow your chatbots to either: code behavioral standards into the AI and monitor for deviations (a sketch of what this can look like follows this list).
- Supervision by humans. Culture is the foundation. Train employees to recognize when AI output deviates from policy or quality standards and to “see something, say something.” Front-line awareness is often the first to catch flaws; reward it.
- Technical documentation. Maintain the artifacts needed to diagnose and correct model drift. Hallucinations are inevitable; monitoring, detection, and remediation are what count.
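To make the “uninterrupted monitoring” and “technical documentation” pillars concrete, here is a minimal sketch of a chatbot guardrail. It is illustrative only and was not part of the briefing: the banned phrases, confidence floor, and log file name are hypothetical placeholders for values a real program would take from its board-approved risk appetite.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

# Hypothetical thresholds: real values would come from the board-approved
# risk appetite, not from code defaults.
CONFIDENCE_FLOOR = 0.85
BANNED_PHRASES = ("guaranteed returns", "cancel your account yourself")

@dataclass
class AuditRecord:
    """A technical-documentation artifact proving oversight occurred."""
    timestamp: str
    user_prompt: str
    model_reply: str
    confidence: float
    violations: list[str]
    action: str  # "delivered" or "escalated_to_human"

def guardrail(user_prompt: str, model_reply: str, confidence: float) -> AuditRecord:
    """Check one chatbot reply against coded behavioral standards."""
    violations = [p for p in BANNED_PHRASES if p in model_reply.lower()]
    if confidence < CONFIDENCE_FLOOR:
        violations.append("confidence_below_floor")
    action = "escalated_to_human" if violations else "delivered"
    record = AuditRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        user_prompt=user_prompt,
        model_reply=model_reply,
        confidence=confidence,
        violations=violations,
        action=action,
    )
    # Append-only log: the remediation artifact a board can later ask for.
    with open("ai_oversight_log.jsonl", "a") as log:
        log.write(json.dumps(asdict(record)) + "\n")
    return record
```

The append-only log is the kind of artifact directors can request after an incident: it shows what the system said, which standard it violated, and whether a human intervened.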
From compliance governance to data governance
Most boards still delegate technology oversight to the audit committee. Giles warned that piling AI and cyber onto an already oversubscribed audit agenda invites process failures and Caremark exposure. His recommendation: create a technology (or data and technology) committee that centrally oversees AI, cyber, data governance, and digital transformation and reports to the full board.
Where board refreshment is slow or skill gaps persist, Giles advises adding non-voting advisory directors on a 24-month rotation. This provides a rapid infusion of expertise, builds institutional knowledge, and creates a pipeline of future fiduciary directors.
Readiness is determined by people, not tools
Heckart emphasized that the success of AI programs rests more on change management than on algorithms. As a CEO and public-company board member, she explained that her team works with Global Data Innovation to review AI applications and controls quarterly against the TRUST framework, coupled with company-wide training and AI “coaches” who guide employees on responsible usage.
Speakers cited published research showing that:
- Most AI failures are people or process failures, not technical defects.
- Companies overinvest in tools and underinvest in training, forfeiting significant productivity and risk-reduction gains.
- Organizations that start with human-AI collaboration see stronger performance improvements than those pursuing pure replacement.
Heckart’s punchline: treat every employee as a manager of AI. With generative tools built into productivity suites, even early-career staff now supervise “digital interns.” That demands training in judgment, such as setting context, checking quality, and knowing when to escalate, which goes far beyond technical upskilling.
Activists, index funds, and the new scorecard
Expect activist investors to mine disclosures, canvass customers and suppliers, and benchmark board performance. When AI spending is high and returns are ambiguous, or when risk management appears to lag, boards will be challenged on strategy, skills, and speed. Activists increasingly coordinate with large index funds and proxy advisors, so the tone and content of the board’s response matter. The questions they ask track the TRUST pillars and basic capital discipline: Where is the ROI? Where are the risk controls? Who on the board actually understands this?
5 Questions Directors Can Ask This Quarter
- Triage: Which AI initiatives directly advance our top strategic priorities, and which fall outside our mission and risk appetite?
- Right data: Are we confident that our training data is accurate, ethically sourced, and covered by appropriate intellectual property, privacy, and commercial rights?
- Uninterrupted monitoring: How do we test and audit model accuracy, bias, and drift, and how quickly can we detect and correct errors? (A minimal drift-check sketch follows this list.)
- Supervision: If AI output deviates from policy or ethics, is there a clear escalation path, and who is responsible for intervening?
- Technical documentation: Can we produce artifacts proving that oversight, monitoring, and remediation occurred when an issue arose?
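As a companion to the “uninterrupted monitoring” question, here is a minimal, hypothetical drift check. It assumes weekly accuracy scores on a fixed evaluation set; the window size and tolerance are illustrative placeholders, not values drawn from the briefing.

```python
from statistics import mean

# Hypothetical alert threshold: flag when the latest accuracy drops more than
# 5 points below the trailing baseline. A real threshold would come from
# management's model-risk policy, not a code default.
DRIFT_TOLERANCE = 0.05

def detect_drift(weekly_accuracy: list[float], window: int = 4) -> bool:
    """Compare the latest accuracy to the trailing-window baseline."""
    if len(weekly_accuracy) <= window:
        return False  # not enough history to establish a baseline
    baseline = mean(weekly_accuracy[-(window + 1):-1])
    return (baseline - weekly_accuracy[-1]) > DRIFT_TOLERANCE

# Example: accuracy held near 0.92, then fell to 0.83. Drift is flagged,
# and per the escalation path a human owner is assigned to remediate.
scores = [0.93, 0.92, 0.92, 0.91, 0.83]
print(detect_drift(scores))  # True
```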
The mandate: move fast, with guardrails
Several themes recurred throughout the discussion, but the clearest was this: the bigger risk for incumbents is moving too slowly. AI is reshaping cost curves, customer experiences, and business models. Boards must govern responsibly while experimenting boldly, with visible board literacy, actionable frameworks, the right committee structures, and people-first readiness to turn pilots into performance.
Done well, AI becomes part of the company’s operating system: aligned with strategy, measured by outcomes, monitored for risk, and powered by capable people. That is not just governance; that is durable advantage.
