McKinsey Says Rewiring Work Beats Hoarding AI Pilots

State of AI reports show experiments everywhere and P&L impact nowhere. Swap redundant pilots for CC-able specialist inboxes tied to real handoffs.

McKinsey’s QuantumBlack State of AI work keeps documenting the same uncomfortable gap: lots of experiments, not enough scaled impact. Their headline finding is operating-model change, not a newer foundation model (<a href="https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai" target="_blank" rel="noopener noreferrer">McKinsey State of AI</a>). The McKinsey operations blog on generative AI productivity is even blunter: upside stalls until workflows are redesigned around how work actually flows (<a href="https://www.mckinsey.com/capabilities/operations/our-insights/operations-blog/gen-ais-productivity-promise-huge-potential-but-most-have-not-yet-reached-scaled-impact" target="_blank" rel="noopener noreferrer">McKinsey on scaled gen AI impact</a>). Harvard Business Review’s analysis of how teams spend time “saved” by generative AI shows those minutes returning as rework when responsibilities stay fuzzy (<a href="https://hbr.org/2025/03/how-is-your-team-spending-the-time-saved-by-gen-ai" target="_blank" rel="noopener noreferrer">HBR on time saved by gen AI</a>). NIST’s AI Risk Management Framework pushes teams to measure outcomes in business context, not leaderboard accuracy (<a href="https://www.nist.gov/itl/ai-risk-management-framework" target="_blank" rel="noopener noreferrer">NIST AI RMF</a>). OECD firm surveys mirror the same managerial and skills bottlenecks (<a href="https://www.oecd.org/en/publications/the-adoption-of-artificial-intelligence-in-firms_f9ef33c3-en.html" target="_blank" rel="noopener noreferrer">OECD AI adoption in firms</a>).

Why another pilot is not rewiring

Pilots are safe. Rewiring is political. It means retiring redundant tools, changing who approves what, and writing down the handoffs. If you only add models, you get faster chaos.

The lowest-friction rewiring move: specialist inboxes

via.email treats each agent address as a micro-service workers compose by CC. You email hundreds of built-in specialists across departments—see https://www.via.email/agents—or create custom agents with create@via.email. Each reply is generated by an LLM with a fixed expert prompt inside the thread you already use for approvals.
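Because each agent is a plain SMTP address, "composing by CC" needs no SDK; it is an ordinary email. A minimal sketch in Python's standard library, assuming a hypothetical agent address `distill@via.email` and illustrative sender/recipient addresses (only `create@via.email` and the directory URL above are confirmed by this article):

```python
from email.message import EmailMessage

# Build a normal thread message; the agent is just another CC recipient.
# "distill@via.email" is a hypothetical agent address for illustration.
msg = EmailMessage()
msg["From"] = "pm@example.com"
msg["To"] = "steering-committee@example.com"
msg["Cc"] = "distill@via.email"
msg["Subject"] = "Q3 steering prep: raw notes below"
msg.set_content(
    "Team,\n\n"
    "Pasting the raw workshop notes for Thursday's steering review.\n"
    "CC'd agent: please reply with a three-point executive summary.\n"
)

# Sending is standard SMTP (uncomment and fill in real credentials):
# import smtplib
# with smtplib.SMTP("smtp.example.com", 587) as s:
#     s.starttls()
#     s.login("pm@example.com", "app-password")
#     s.send_message(msg)

print(msg["Cc"])  # the agent rides along in the existing approval thread
```

The point of the sketch: the "integration" is a header, so the agent's reply lands in the same thread your approvers already read.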

Agents exist to match transformation communications and operating cadence. Tier-dependent features like attachments are listed at https://www.via.email/pricing. Note that via.email does not access your inbox or run autonomous multi-thread workflows; it processes only what you send to each agent, in-thread.

Practical next step

Sunset three redundant pilot tools. Replace them with five high-use agent addresses tied to real workflows: weekly steering prep (Prep Meeting Brief), executive summaries (Distill to Three), and accountable follow-ups (Extract Action Items). Measure the rework hours HBR warns about, not model counts.

Related reading

We unpacked why productivity breaks after too many AI tools, how context switching bleeds GDP-scale time, and what email ROI looks like when you count hours. Consultants already live in client mail—briefs from threads are the same rewiring pattern with a different letterhead.

Rewiring is not a keynote theme. It is who gets CC’d. Put the specialists on SMTP, and the org chart stops pretending chat tabs equal change.

What is via.email?

AI agents, each living at an email address. Just send an email to get work done. No apps. No downloads.

How to use?

Send or forward emails to agents and get results by reply. Try it without registering. Join to get free credits.

Is it safe?

Yes. Your emails are encrypted, deleted after processing, and never used to train AI models.

More power?

Upgrade to get more credits, add email attachments, create custom agents, and access advanced features.