5 AI Truths Every MBA Needs to Accelerate Their Next Promotion
A few days ago, I stepped in as a last-minute speaker at the University of Utah’s MBA Hub Day to walk a room full of future operators and business leaders through the future of work and the role of AI in modern organizations.
When I asked who was actually using tools like ChatGPT, Claude, or Gemini in their day-to-day work, three hands went up.
Three.
This is an MBA cohort that will soon be running P&Ls, teams, and product lines...or starting companies. If they’re this far behind, odds are most orgs and entrepreneurs are too.
That disconnect matters. Because while the room was silent, the rest of the world was quietly reorganizing around AI. Here are the five most important ideas I shared with them - and they apply whether you’re still in school, already sitting in the exec chair, or running your own venture.
1. AI isn’t a feature wave.
It’s the Industrial Revolution on fast-forward.
The neat party-trick framing (“write my email, make a meme”) is blinding many smart people to the scale of what’s coming.
A better mental model: AI is closer to another Industrial Revolution than “the next app.” The difference is time compression. Instead of unfolding over 100 years, it’s playing out over 5–10.
That’s what I mean by the “AI supercycle”:
Fire, steam, electricity gave comfort and power over centuries.
Microprocessors + the Internet accelerated comms and global scale.
Generative AI? The curve starts to look vertical.
Here’s the stat that should make every leader sit up straight:
A recent MIT-led simulation, the “Iceberg Index,” modeled what today’s generative-AI systems could replace if deployed at scale: current AI capabilities could already automate the work of 11.7% of the U.S. workforce, using nothing more advanced than the tools we have right now (and the tools have only grown more capable since that study 😬).
That’s the equivalent of $1.2 trillion in salaries, affecting roles in every state and across nearly every major job category.
The question for leaders is no longer “Is this real?” It’s:
Where in my business is the curve about to go vertical?
Will we be the ones riding it...or getting flattened by it?
2. Today’s AI is “narrow and dumb” … and that’s exactly why it’s dangerous to ignore.
I walked the class through the evolution: rule-based systems → classical ML → deep learning → generative AI.
And as impressive as today’s models are, they are all still “narrow” AI. They don’t understand, have goals, or wake up worrying about your OKRs.
But here’s the trap: narrow AI + massive data + good UX = something that feels like intelligence.
Feeling “human enough” is all it needs to start reshaping workflows.
If you sit in a room and say, “We’ll wait until it becomes truly agentic before we move,” you’ve already lost. Narrow tools are quietly:
Saving lives in radiology diagnostics and improving point-of-care decisions
Scaling fraud detection to flag billions in suspicious transactions in real time
Powering hundreds of billions in e-commerce revenues via personalization
Making millions of knowledge workers 10x versions of themselves
This isn’t about near-future science fiction. It’s about unit economics.
3. Your brain is wired to underestimate AI (and overestimate its flaws).
I saw that bias play out in real time when I asked the class about their lack of AI usage. The room didn’t signal apathy or skepticism - it revealed a perception gap. Individuals underestimate AI because it feels abstract or fragile, while the broader economy is already adopting it at a pace most people don’t see.
It's called negativity bias, and it's hardwired into all of us. If something is new, confusing, or strange, our brains often label it as threatening and overemphasize every potential shortcoming.
I'd be lying if I said I wasn't shocked by how few of the students were leveraging these tools. While individuals hesitate, the economy is absorbing AI at full speed.
The 2024 Microsoft/LinkedIn “Work Trend Index” surveyed 31,000 knowledge workers across 31 countries and found that 75 percent say they are already using generative AI at work.
That contrast matters: AI is spreading fast globally, yet only a small fraction of people are treating it as a strategic skill rather than a background tool. Even in an MBA program.
The right question isn’t “Is AI perfect?” The right question is “Is it meaningfully better than the human-only baseline for this task?”
If the answer is yes, the right move is to:
Wrap it in process (human + AI)
Define guardrails
Decide where it can take the lead so you don’t get blindsided
4. If you only do one thing: climb the AI pyramid in your own role.
I walked the class through a simple 3-tiered pyramid of where to start:
Base: things we don’t like doing
→ Note-taking, inbox triage, basic research, meeting summaries - all low risk, high ROI
Middle: things humans do poorly
→ Pattern detection across thousands of data rows, anomaly detection, auditing compliance - AI excels at scale + accuracy
Top: things that change the business
→ Personalized sales outreach, dynamic packaging/pricing, proactive customer engagement
For the MBA room, I translated that pyramid into a prompt:
Within my current or next role:
What do I hate doing?
What am I slow or bad at?
Where does my work touch revenue, retention, or risk?
Then, pick one use case in each layer and prototype it with off-the-shelf tools. Show a quick win, get some positive momentum, and then use that credibility to push for strategic investment.
5. Shadow AI is already in your org.
Your job is to shape it before it becomes a problem you’re forced to clean up.
One of the slides I showed was the “AI iceberg”: visible adoption (10–20%) vs. hidden, unregulated use (the vast majority below the surface).
The 2024 Work Trend Index supports this: 78 percent of employees using AI admit they brought their own tools to work (BYOAI), often without employer approval.
That means many firms already have “shadow AI,” but few have any governance, policy, or oversight.
As a leader or soon-to-be leader, you get to set the terms:
Write an AI manifesto (what’s off-limits, what a human must review)
Create a lightweight policy (approved tools, data types, red lines)
Invest in cross-org AI literacy
Assign ownership (who’s accountable)
Sanction a sandbox (isolated environment for safe experiments before production roll-out).
If you don’t, you’re just waiting for compliance, data leakage, or legal trouble to force a reaction. And that’s the worst possible timing in a supercycle.
That’s why the edge goes to anyone willing to get ahead of this curve rather than be dragged along by it.
At the close of the talk, I told them this, and I’ll emphasize the same to you:
You are not late.
You're early. But we’re on the edge of a major re-rating of value in the workforce.
Most orgs are still debating whether AI is real, a good fit, or worth all the change management effort. The smart ones are already reorganizing around it.
If you can:
Understand what today’s AI really is (and isn’t)
See past the hype and the horror stories
Climb the AI pyramid in your own role
Push your org toward a manifesto + policy + sandbox
You won’t just “keep up.”
You’ll be the person everyone else forwards this newsletter to and says, “This is exactly what we should be doing.”
Find your next edge,
Eli
Want help applying this to your product or strategy? We’re ready when you are → Let's get started.