Your Board Can't Imagine the Future

Real data from a session I ran last week with the board and senior leadership of a $10 billion company:

Do leaders publicly share how they use AI? 90% said no.

Does the company measure AI proficiency? 100% said no.

Do you personally have a clearly defined standard of proficiency for your own role? Every single person said no.

A room full of senior leaders—people responsible for setting strategy, approving multimillion-dollar investments, holding the CEO accountable—and not one of them could articulate what "good" looks like for AI in their own work.

This is the problem.

"My Old Time Thinking"

We opened the session with what we call a suspension-of-disbelief demo—showing in real time what AI can actually do. No slides. No theory. Just live, meaningful work product created before their eyes: nuanced analysis of their own strategic questions, deep research reports, presentation decks, even an AI-generated podcast summarizing their competitive landscape. All built in minutes, not months.

About sixty minutes in, one of the board members interrupted.

"This is overwhelming for me," he said. "I would love to understand this better. With my 'old time thinking,' the demonstration you just gave us... it's just overwhelming."

He paused. "You use tools as I use my tool set for mechanics."

What he was saying: You're fluent in something I've never touched. You reach for these tools the way I reach for a wrench—instinctively, without thinking. But I don't even know where to start.

This wasn't a junior employee. This was a senior leader responsible for governance of a $10 billion enterprise—more than ten million customers, nearly 2,000 locations. And he was being honest about where he stood: his current mental model—his "old time thinking"—had no frame of reference for what he'd just witnessed.

That's where most boards are. They've seen reports. They've read decks. They've delegated to committees. But they haven't done the work. And until they do, they're governing with a mental model built for a world that no longer exists.

Which raises a question: when leadership's imagination is constrained, who's supposed to push back?

The Imagination Ceiling

Boards are supposed to fix this. They're supposed to push enterprises toward startup-level ambition. To hold CEOs accountable to bold visions. To ask, "Why aren't we moving faster? Why aren't we thinking bigger?"

But accountability requires imagination. And imagination is constrained by experience.

Another board member in last week's session named the dynamic perfectly.

"If you don't give limited resources," he said, "and everybody can have anything without automating anything all the time, the force against implementation is too big. You have enough time and assistance to do everything manually anyway, so what's the rush?"

This is the trap. If goals are achievable without AI, there's no forcing function. People will default to the old way—not because they're lazy, but because the old way works well enough.

I shared Simon Sinek's observation with the room: the difference between a startup and an enterprise is that a startup's ambitions exceed its grasp, while an enterprise's ambitions lie within its reach.

You could feel it land.

Ethan Mollick, the Wharton professor who studies AI adoption, put it bluntly on my podcast, Beyond the Prompt: "I talk to CEOs and high-level executives all the time, and none of them are using AI themselves. They've all delegated to a committee that will be reporting back in three months, which will then start a process of looking for an RFP to hire a consultant who will do an initial analysis."

If that's true for CEOs, imagine what's true for their boards.

McKinsey just released data that validates this. Only about 6 percent of organizations are seeing meaningful bottom-line impact from AI. What separates them from the rest of the pack? Ambition. High performers are three times more likely than others to say their organization intends to use AI for transformative change—not just efficiency gains. The organizations chasing cost reduction are getting cost reduction. The organizations chasing transformation are getting growth, innovation, competitive differentiation, and market share.

In the age of AI, if your goals are achievable without augmentation, they're too small. And boards that lack the visceral experience to know what's possible will keep approving goals that are too small. They'll sign off on overly conservative timelines. They'll accept "a year and a million dollars" because they can't imagine it any other way.

What It Looks Like When Someone Knows

I wrote about Melissa Cheals, CEO of Smartly in New Zealand, a couple of weeks ago. After going through an AI-building workshop, she rejected a million-dollar, year-long proposal from her head of engineering. "Pivot time," she told him. "We're not building that way anymore." Two weeks. Hackathons. A completely different approach.

She could do that because she knew. She'd done the work herself.

Now imagine her board. If they hadn't had similar exposure, they would have approved that million-dollar proposal. They would have nodded along at a year-long timeline. They wouldn't have known to push back.

The CEO caught it. But what if she hadn't? Who's the backstop?

The Questions Boards Should Be Asking

Fiduciary duty now includes imagination.

It's not just: "Is our AI strategy sound?"

It's: "Show me how you're personally using AI for the most important decisions—not the least."

It's: "When was the last time you were surprised by what's possible?"

It's: "What ideas do we have that aren't quite possible with LLM capabilities, which we're actively testing with each new model release?"

It's: "Are we setting goals that are impossible to achieve without AI augmentation? Or are we setting goals that 'mere humans' could hit?"

And most importantly, it's asking these questions with the visceral experience that can distinguish a real answer from a performance. If you haven't done the work yourself, you can't tell the difference. You'll accept AI theater. You'll approve conservative timelines. You'll rubber-stamp strategies built for yesterday's constraints.

What Changes When You Get the Experience

Here's what surprised me about last week's session.

By the end of the four hours, the same board member who had confessed to being overwhelmed by his "old time thinking" was articulating a completely different vision.

"No information should get to me," he said, "that wasn't reviewed by an AI."

And he wasn't just saying it. He was asking for help implementing it. That day.

Four hours earlier, he didn't know what he didn't know. Now he was redesigning how he works.

That's what happens when you give leaders the visceral experience of what's actually possible. They ratchet up their expectations.

The Invitation

Here's what I'd ask any board member reading this:

When was the last time you personally worked with AI on something that mattered to you? Not delegated. Not watched a demo. Actually worked with it.

If the answer is "never"—or even "a few months ago"—you've got a problem. The capabilities have changed. Your mental model is stale. And you're making governance decisions based on an outdated map.

Fiduciary duty used to mean financial oversight. Then it expanded to include strategy, risk, culture.

Now it includes imagination.

Can you imagine the future your organization is building toward? Can you recognize when leadership is thinking too small? Can you tell the difference between an ambitious timeline and an obsolete one?

If not, you're not qualified to oversee those who can.

That's not an insult. It's an invitation. And as of last week, I've seen what happens when boards accept it.

Related: Stop Being a Hypocrite
Related: Malpractice
Related: Beyond the Prompt: Ethan Mollick


P.S. If this piece hit home and you're wondering where to start, my AI Bootcamp is the fastest way to get the visceral experience I'm describing. Fifteen minutes a day for three weeks. Hands on the keyboard. No prior AI fluency required.
