OECD Says Skills, Not Models, Stall Enterprise AI Adoption
Your pilot did not break because GPT forgot math. It broke because nobody paid the training and workflow tax—and email is the cheapest place to start collecting wins.
Your generative AI pilot did not fail because the model is dumb. It failed because nobody budgeted for the boring part: teaching people how to use it, redesigning workflows, and paying the integration tax. The OECD has been saying this plainly in its firm-level adoption work, and McKinsey keeps echoing it in operations commentary: the binding constraint is rarely raw model quality. It is skills, change management, and how much new software surface area you force onto a workforce that already lives in email.
Why capable models still stall inside companies
The OECD’s synthesis on AI adoption in firms stresses skills shortages, integration costs, and the need for training partnerships—not a shortage of vendor demos. Their companion work on generative AI, productivity, and entrepreneurship makes the same point in academic terms: benefits hinge on complementary investments and user fluency, not on flipping a license switch. If you want receipts, start with the OECD’s firm adoption report (<a href="https://www.oecd.org/en/publications/the-adoption-of-artificial-intelligence-in-firms_f9ef33c3-en.html" target="_blank" rel="noopener noreferrer">read it here</a>) and the generative AI productivity paper (<a href="https://www.oecd.org/en/publications/the-effects-of-generative-ai-on-productivity-innovation-and-entrepreneurship_b21df222-en.html" target="_blank" rel="noopener noreferrer">read it here</a>).
McKinsey’s operations writing on scaled generative AI impact lands in the same zip code: huge potential, but most organizations have not rewired workflows enough to capture value (<a href="https://www.mckinsey.com/capabilities/operations/our-insights/operations-blog/gen-ais-productivity-promise-huge-potential-but-most-have-not-yet-reached-scaled-impact" target="_blank" rel="noopener noreferrer">McKinsey on scaled impact</a>). That is not an indictment of your IT team. It is arithmetic. Every new pane of glass adds training minutes, help-desk tickets, and shadow workflows where people forward threads to each other because the official tool never quite fit.
Which employees get left out when the interface changes
The World Economic Forum’s Future of Jobs reporting keeps highlighting reskilling timelines that outrun vendor hype cycles (<a href="https://www.weforum.org/publications/the-future-of-jobs-report-2025/" target="_blank" rel="noopener noreferrer">WEF Future of Jobs 2025</a>). Pew’s science and society work on ChatGPT adoption shows uneven personal familiarity with AI assistants, which is a decent proxy for how uneven enterprise rollouts feel team by team (<a href="https://www.pewresearch.org/science/2025/04/28/americans-use-of-chatgpt-is-growing-especially-for-learning/" target="_blank" rel="noopener noreferrer">Pew on ChatGPT use</a>). Stanford HAI’s AI Index is useful when you need cross-country adoption stats instead of vibes (<a href="https://hai.stanford.edu/ai-index" target="_blank" rel="noopener noreferrer">Stanford HAI AI Index</a>).
Translation: the employees who struggle are not lazy. They are busy. The moment you ask them to learn another login, another prompt pattern, and another upload path, you are competing with the inbox they already know by muscle memory.
How email lowers the training burden without bypassing governance
Email is not glamorous. It is also the most widely practiced collaboration protocol on the planet. MIT’s experimental work on ChatGPT and professional writing showed how fast people adopt assistance when it slots into tasks they already perform (<a href="https://news.mit.edu/2023/study-finds-chatgpt-boosts-worker-productivity-writing-0714" target="_blank" rel="noopener noreferrer">MIT productivity study</a>). That is the design lesson: meet people inside the ritual they already trust, then add intelligence there.
That is the via.email bet in one sentence. via.email is an email-based AI agents platform: you mail specialized agents at dedicated addresses, each powered by an LLM with a fixed expert prompt. No new dashboard to master before you get value—just the same To, Subject, Body flow you have used for twenty years. You can browse hundreds of built-in agents across departments at <a href="https://www.via.email/agents" target="_blank" rel="noopener noreferrer">via.email/agents</a>, add one by emailing add@via.email with the agent in CC, or roll your own with create@via.email.
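Because the onboarding is just mail, it is also scriptable. Here is a minimal sketch of that add-by-CC flow using Python's standard library; the SMTP host, credentials, and the subject and body wording are placeholders, not a documented via.email contract. Only the convention of mailing add@via.email with the agent's address in CC comes from the description above.

```python
import smtplib
from email.message import EmailMessage

# Sketch: subscribe to an agent by mailing add@via.email with the
# agent's address in CC. Host, login, and wording are placeholders.
msg = EmailMessage()
msg["From"] = "ops-team@example.com"
msg["To"] = "add@via.email"
msg["Cc"] = "distill.to.three@via.email"  # the agent to add
msg["Subject"] = "Add agent for the ops pilot"
msg.set_content("Adding this agent to our shared mailbox.")

with smtplib.SMTP("smtp.example.com", 587) as smtp:
    smtp.starttls()
    smtp.login("ops-team@example.com", "app-password")  # placeholder
    smtp.send_message(msg)
```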
What to measure beyond vanity logins
Harvard Business Review has been asking leaders how teams actually spend the time “saved” by generative tools, which is a polite way of saying: if you do not redesign work, you just get faster chaos (<a href="https://hbr.org/2025/03/how-is-your-team-spending-the-time-saved-by-gen-ai" target="_blank" rel="noopener noreferrer">HBR on time saved by gen AI</a>). Microsoft’s narrative about Copilot and the “infinite workday” is really an admission that message volume is not going back down (<a href="https://www.microsoft.com/en-us/microsoft-365/blog/2025/06/26/how-microsoft-365-copilot-and-agents-help-tackle-the-infinite-workday/" target="_blank" rel="noopener noreferrer">Microsoft on Copilot and workload</a>). So pick metrics that map to decisions, not demos.
Three starter metrics that survive board scrutiny:
- Minutes from raw email to approved output, split by team and seniority (a minimal computation sketch follows this list).
- Percentage of requests handled inside the thread the human already owns, versus escalations to “the AI tool owner.”
- Quality checks you already trust—legal review, finance sign-off, customer satisfaction—not just “messages sent.”
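To make the first metric concrete: if you can export request and approval timestamps from your help desk or mailbox audit log, the computation is a short groupby. A sketch follows, assuming a hypothetical CSV with request_ts, approved_ts, team, and seniority columns; the schema is illustrative, not an export format any particular tool ships.

```python
import pandas as pd

# Hypothetical input: one row per request, with the timestamp the
# raw email arrived and the timestamp the output was approved.
df = pd.read_csv("requests.csv", parse_dates=["request_ts", "approved_ts"])

# Minutes from raw email to approved output.
df["minutes_to_approved"] = (
    df["approved_ts"] - df["request_ts"]
).dt.total_seconds() / 60

# Median per team and seniority; medians survive the one thread
# that sat in legal review for a week.
print(
    df.groupby(["team", "seniority"])["minutes_to_approved"]
      .median()
      .round(1)
)
```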
Practical next step: three agents, one shared mailbox
Before you buy another platform seat, try this: pick three high-volume workflows—distilling long threads, extracting action items, turning finalized text into a clean PDF—and route them through mailable specialists for two weeks (a send sketch follows the list).
- Distill to Three: distill.to.three@via.email
- Extract Action Items: extract.action.items@via.email
- Convert to PDF: convert.to.pdf@via.email
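Routing a real workflow through one of these is just composing a message. A sketch, assuming a thread exported to a text file; the subject line and body convention are illustrative, not a documented contract, and delivery reuses whatever SMTP relay you already have, as in the earlier onboarding sketch.

```python
from email.message import EmailMessage

# Sketch: send one real thread to the distill agent listed above.
# The agent address comes from the list; the file name, sender,
# and subject are assumptions for illustration.
with open("thread_export.txt") as f:
    thread_text = f.read()  # the long thread, pasted or exported

msg = EmailMessage()
msg["From"] = "ops-team@example.com"
msg["To"] = "distill.to.three@via.email"
msg["Subject"] = "Distill this vendor thread to three points"
msg.set_content(thread_text)
# Deliver via your existing SMTP relay; the agent replies by
# email like any other correspondent.
```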
You are testing whether removing interface friction changes adoption, not whether a model can pass an exam.
How this connects to the wider inbox story
If your organization is already feeling tool sprawl, you are not imagining it. We have written about the tax of context switching, why productivity breaks when you add too many AI tools, and what one interface buys you when everything else is tabs and notifications. When AI speeds up sending, the honest follow-up is how you process what comes back; that is why, as AI intensifies work, single-inbox discipline matters more than another sidebar.
The OECD is not telling you to slow down AI. It is telling you to stop pretending adoption is a model download. Put the help where the work already happens—inside email—and you shrink the skills gap by shrinking the training surface, not by wishing humans had more hours in the day.