Implementing AI in Organizations: A Practitioner’s Guide
The hard part of AI isn’t the AI—it’s everything else.
Author’s Note: This summer at Harvard I will be teaching two courses: (1) Management Consulting in the Age of AI and (2) Innovating with Generative AI for Leaders and Managers. This essay is adapted from one of my course lectures[1].
If you would like to receive email updates when I publish new material, please subscribe to my Substack (it is free and also provides access to my archive).
A general practitioner (GP) in the Netherlands runs two clinics serving 4,300 patients. Her problem isn’t information—it’s time. Every day brings a torrent of messages, follow-ups, and care-plan questions requiring both clinical accuracy and patient-friendly language. She didn’t respond by “deploying AI”—a phrase that explains nothing and excuses everything. She redesigned the workflow.
She built custom AI assistants that draft responses anchored in national guidelines. But she stays responsible: She reviews, edits, and personalizes each message before it goes out. She uses the system to adapt content for different reading levels and languages, and to translate protocols into training checklists for nurses. The system isn’t replacing clinical judgment; it’s scaling it. The result: Higher-quality communication, more consistent execution, and renewed capacity in a constrained environment (OpenAI, 2026).
This is what AI implementation as augmentation looks like when it works—and it's rare. The distance between what's technically possible and what organizations actually achieve is the central problem this series addresses.
The Problem: A Widening Gap Between Capability and Use
There’s a growing consensus that generative AI is a general-purpose technology—placing it in the same category as electricity and the steam engine. These are the technologies that come along and touch everything. The implications are vast: AI has the potential to reshape activities across huge swaths of the economy, and we have not seen change at this scale in our lifetimes (Calvino, Haerle & Liu, 2025; Eloundou et al., 2024).
Some estimates suggest AI could double productivity growth, and these projections assume AI behaves like a “normal” technology. If AI turns out to be more transformative than that—and it might—the actual changes (to the economy, society, and more) could be far larger (Brynjolfsson, Korinek & Agrawal, 2025).
The last time the US economy experienced this kind of transformational change was in the early 20th century; to get an idea of what that was like, please see: The Coming AI Disruption: Lessons from 1899–1929.
We’re starting to see early evidence of this. In customer service, AI assistance boosted productivity by 14% on average—and by 34% for novice workers, compressing months of learning into weeks (Brynjolfsson, Li & Raymond, 2025). Teams using AI coding assistants report efficiency gains in the 20–50% range, though results vary by context and how you define “done” (Anthropic, 2025; VentureBeat, 2024). One widely cited estimate suggests that 80% of the U.S. workforce could see at least 10% of their tasks affected (Eloundou et al., 2024).
And we’re only three years into a transformation that has decades to run.
And yet.
Your organization probably has access to AI tools far more capable than those your teams actually use. OpenAI calls this the capability overhang: The gap between what AI can do and what employees do with it (OpenAI, 2026). The biggest productivity gains in the short run will come from closing this gap—which keeps widening as the models improve.
“As AI capabilities have improved, we see a widening ‘capability overhang,’ defined as the gap between what AI tools can do and how typical users are using them … The typical power user uses 7x more thinking capabilities than the typical user.” (OpenAI, 2026)
This isn’t a technology problem. It’s a leadership challenge—about workflow redesign, culture change, and change management. The admission ticket is Access—having the tools available, connected to your data, and blessed by IT. But Access alone is like giving everyone a gym membership and being surprised when nobody gets fit.
What actually produces results is Agency—a term worth defining explicitly:
Access = the tools exist; people are allowed to use them.
Agency = people have the skill to use them well, the incentives to bother, permission to experiment, and workflows that make AI useful rather than awkward.
Think of this distinction as the difference between a high-end gym membership and an actual fitness transformation: While a keycard provides Access, it’s the structured training program, expert coaching, and personal discipline that truly constitute Agency.
Without Agency, employees either underuse the tools or use them in the shadows. Some hide AI use for image reasons—visible reliance can be read as low capability or weak judgment. That blocks knowledge-sharing and prevents teams from learning together (Almog, 2025). At a minimum, leaders need to celebrate transparency and experimentation. (They should be doing far more—but that’s for a later essay.)
The Dutch GP has both Access and Agency. She redesigned her workflow, built custom assistants for her context, and stays responsible for the output. Most organizations stop at Access, then wonder why adoption stalls.
The distinction grows more urgent as AI grows more capable. Today’s frontier models handle tasks that would have taken meaningful chunks of expert time two years ago. The pace appears to be accelerating—some analyses suggest the “length” of tasks these systems can reliably handle is expanding on the order of months, not years (OpenAI, 2026). As AI moves from answering questions to taking actions—planning, executing, coordinating—the gap between organizations that provide tools, and those that redesign workflows, will widen faster.
Whatever you learn about AI implementation this year will need updating next year. Without standing systems for refreshing training, workflows, and governance, the overhang keeps widening.
No company will go bankrupt because AI exists. Plenty will go bankrupt because their competitors learned to use it first.
The Strategic Choice: Augmentation Creates More Value
Implementing AI at scale requires redesigning workflows. But workflow redesign involves a fundamental choice—one that determines not just productivity, but what kind of organization you’re building.
You can design systems that augment workers—expanding what people can do, lifting your lowest performers toward the median. I call this inclusive acceleration [2]. Or you can design systems that replace workers—capturing labor savings by removing tasks from the human workflow. Erik Brynjolfsson calls the extreme version of that path the Turing Trap (Brynjolfsson, 2022; Strauss, 2026).
The case for augmentation isn’t just about avoiding bad social outcomes. It’s the better value-creation strategy.
Brynjolfsson offers a thought experiment: Imagine ancient Greeks had automated every existing job—herding sheep, making pottery, weaving tunics. Productivity soars. Living standards stay stuck at clay pots and horse-drawn carts. Most of the value our economy has created since ancient times comes from new goods and services—not cheaper versions of existing ones (Brynjolfsson, 2022). Sixty percent of people today work in occupations that didn’t exist in 1940 (Autor, 2015).
The pattern repeats closer in time. “Computer” used to mean a person with a mechanical calculator. Programmers used punchcards into the 1970s. At each transition, companies faced a choice: Swap in new technology to do the same work cheaper—or rethink what became possible. The ones that merely swapped? You’ve never heard of them. Survivorship bias is a hell of a teacher.
Brynjolfsson’s key insight: The set of tasks humans and machines can do together vastly exceeds what machines could automate alone. Deploy AI to cut headcount and you capture a fraction of its value. Deploy it to expand what your people can do—and you’re playing a much larger game.
One path optimizes this quarter’s labor costs. The other builds the capabilities that define the next decade.
The Pipeline Risk: Why Substitution Compounds Over Time
The Turing Trap isn’t just about foregone value creation. It’s about slower-moving damage that’s easy to miss until it’s too late.
When AI substitutes for human work rather than augmenting it, the pathways that build capability—apprenticeship, learning-by-doing, mentorship—start to disappear. Junior workers stop getting the practice they need. Senior workers stop having anyone to coach. The tasks that seemed like drudgery were often the curriculum.
This is the organizational equivalent of eating your seed corn.
The early evidence is sobering. Early-career workers (ages 22–25) in the most AI-exposed U.S. occupations have seen employment fall roughly 13% relative to less-exposed occupations since late 2022—even after controlling for firm-level shocks. Older workers in the same roles held steady or rose (Brynjolfsson, Chandar & Chen, 2025). AI is disrupting the entry-level pipeline first—precisely where the next generation of experts is supposed to be forming.
But this isn’t inevitable. It’s a choice. The same technology that can replace a junior analyst can make that analyst dramatically more capable while they’re still learning. The same system that could eliminate entry-level research tasks could compress time-to-proficiency while preserving learning loops. The question is which path you choose—and whether you’re paying attention to what happens on the first few rungs.
A Practical Test: Opportunity Expansion—or Ladder Removal?
Augmentation [3] expands human capability—helping people do higher-quality work, learn faster, and tackle harder problems. Substitution [3] replaces human effort—automating tasks in ways that reduce the need to develop or reward the underlying skill.
Real deployments are rarely pure. Many systems speed up routine work while quietly shifting judgment and learning out of the human loop. Bad automation often looks like “confidence theater”: the system sounds sure, and humans stop practicing judgment.
Also—please stop treating “human in the loop” like a Harry Potter spell. If your highly paid consultants put “human in the loop” on a slide as the solution—without explaining the failure modes—ask for a refund.
Automation bias is real—and perversely, the more accurate AI gets, the more you have to pay humans to catch the rare mistakes (Parasuraman & Riley, 1997; Bastani & Cachon, 2025). And Human+AI isn’t automatically better than AI alone. Augmentation (human+AI beats human) is common. True synergy (human+AI beats AI) often fails because humans can’t reliably spot errors and end up adding noise (Fernandes et al., 2026).
I will come back to the challenge of Managing Human + AI To Create the Greatest Value in a later essay.
For today, my advice is: Don’t over-theorize it. Run a simple field test: Is this AI implementation expanding opportunity—or removing rungs from the ladder?
Three questions can help:
For individual contributors: Are you managing the AI, or is it managing you? Augmentation expands your autonomy and builds your judgment. Substitution often turns you into a button-pusher monitoring outputs you increasingly don’t understand.
For managers: Is AI helping you coach, or replacing the need to? If the practice opportunities that build skill now flow through the model instead of through your people, that’s ladder removal—and you’ll need to compensate, or you won’t have the talent you need in three years.
For leaders: Have you created the permission environment, incentives, and workflows so your team members have true Agency and can thrive in the world of AI? If you are optimizing for immediate throughput while quietly draining the human pipeline, you are creating an organizational debt for the future. And, if you haven’t provided an environment that fosters Agency, don’t blame the managers and team members when AI fails to live up to its potential.
Sometimes augmentation won’t be an option. Elevator operators disappeared because the technology removed the need for the role. Some occupations will disappear.
But most white-collar roles are more complicated than the elevator operator’s. They’re bundles of tasks—some automatable, some developmental, some judgment-heavy. Design choices matter. You can automate away the practice that creates expertise—or you can use AI to make people better while they’re building it. Companies that make the second choice will win.
What’s Ahead
A friend of mine—CEO of a mid-sized business—asked his most experienced (and expensive) employees to document their work. He didn’t mention this was a prelude to automation. The documentation came back thin, vague, and curiously missing the tacit knowledge that made these workers valuable. His employees understood exactly what was happening and responded rationally.
He may eventually get his automation. But he’s now in an adversarial dynamic with the very people whose knowledge he needs—a tax on implementation he’ll be paying for years.
This is the legitimacy problem. Choices that are transparent and expand opportunity build trust. Choices that visibly eliminate pathways invite resistance. Inclusive acceleration isn’t just better strategy. It’s what makes implementation work.
This series will offer practical guidance for closing the capability overhang, while avoiding the Turing Trap. (Hint: don’t do what my friend did.)
Future essays will address topics such as:
What Should People Learn? Training that works—three levels of AI fluency, why organizations stop at basic prompting, and the change management problems that kill adoption.
How Should Work Get Done? Workflow redesign, particularly as agentic AI rolls out—selecting use cases, best practices in piloting AI, and how to make human oversight feasible when AI is “almost always right.” And, of course, how to handle the change management issues.
Are We Building or Destroying Capability? Who should own AI transformation, how to protect the apprenticeship ladder, and what to learn from the shadow AI employees are already using.
Governance That Learns. Vendor strategy, security and compliance, measuring inclusive acceleration, and what a documented AI strategy should contain.
The hard part of AI isn’t the AI—it’s everything else. The choices you make now will shape whether AI expands your organization’s capability—or quietly hollows it out.
What to Ask Monday Morning
For every AI use case on your roadmap:
Whose learning loop shrinks if this works perfectly?
What tacit knowledge stops being produced—and how will we replace it?
AI is often wrong but always certain—and humans are prone to automation bias (see above). How will you address these issues? And because AI is still in its early stages, egregious errors will happen. When they do, how will you recover (Bastani & Cachon, 2025; Li et al., 2024; Han, 2025)?
Remember: the tasks that feel like drudgery are often the curriculum. Know what you’re cutting before you cut it.
Endnotes
This essay complements my separate essay series on “AI’s Two Forces”. While that series addresses the bigger-picture policy implications of AI—for governments, societies, and markets—this one provides actionable advice for managers and organizational leaders. For purposes of these essays, I am treating AI as a “normal technology” as defined by Narayanan and Kapoor (2025): Powerful and consequential, but subject to the same implementation challenges, organizational dynamics, and human factors as any significant technological change.
A note on terminology: I’ve elsewhere described AI as exerting two divergent forces—a “gravitational pull” that compresses skill gaps and democratizes access, and a “centrifugal push” that concentrates advantage and hollows out pathways. “Inclusive acceleration” is this series’ operational term for designing toward the gravitational pull; the “Turing Trap” captures the centrifugal risk. Different language, same underlying framework. I discuss these issues in detail in Strauss (2026), “AI’s Two Forces: Are You Building Capability—or Eliminating the Ladder?”
If you want something a bit more formal: Augmentation AI refers to technologies that complement workers by enhancing their productivity in existing tasks or enabling them to perform new, higher-value tasks—as opposed to Automation AI, which directly substitutes for human labor by performing tasks previously done by workers, reducing labor demand for those tasks (Marguerit, 2025).
Steven Strauss, Ph.D. teaches, writes, and advises on leadership, strategy, organizational change, and emerging technologies—with a particular emphasis on generative AI and its implications for institutions and work. In the summer of 2026, Strauss will teach two courses at the Harvard University Summer School: Management Consulting in the Age of AI, and Innovating with Generative AI for Leaders and Managers. He served as the John L. Weinberg/Goldman Sachs Visiting Professor at Princeton University (2014–2025) and previously taught at Harvard Kennedy School. Prior to academia, he worked at McKinsey & Company in the London office. Prior to McKinsey, Strauss worked in investment banking and capital markets. He also served as Managing Director at the New York City Economic Development Corporation under Mayor Mike Bloomberg, where he led and supported major economic transformation and innovation initiatives, including leading the work that resulted in the Applied Sciences NYC effort (Cornell Tech). In 2012, he was a Fellow at Harvard’s Advanced Leadership Initiative. Dr. Strauss earned a Ph.D. in Management from Yale University.
Bibliography
Almog, D. (2025, November 23). Barriers to AI Adoption: Image Concerns at Work. Job market paper, Kellogg School of Management, Northwestern University. https://www.daphnealmog.com/research
Anthropic. (2025). How AI is transforming work at Anthropic. https://www.anthropic.com/research/how-ai-is-transforming-work-at-anthropic
Autor, D. H. (2015). Why are there still so many jobs? The history and future of workplace automation. Journal of Economic Perspectives, 29(3), 3–30. https://doi.org/10.1257/jep.29.3.3
Bastani, H., & Cachon, G. P. (2025, December 24). The Human–AI Contracting Paradox. Working paper, Wharton School, University of Pennsylvania. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5066498
Brynjolfsson, E. (2022). The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence. Stanford Digital Economy Lab. https://human-centered.ai/turing-trap/
Brynjolfsson, E., Chandar, B., & Chen, R. (2025). Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence. Stanford Digital Economy Lab working paper. https://digitaleconomy.stanford.edu/publications/canaries-in-the-coal-mine-six-facts-about-the-recent-employment-effects-of-artificial-intelligence/
Brynjolfsson, E., Korinek, A., & Agrawal, A. K. (2025). A Research Agenda for the Economics of Transformative AI. NBER Working Paper No. 34256. https://doi.org/10.3386/w34256
Brynjolfsson, E., Li, D., & Raymond, L. (2025). Generative AI at work. Quarterly Journal of Economics, 140(2), 889–942. https://doi.org/10.1093/qje/qjae044
Calvino, F., Haerle, D., & Liu, S. (2025). Is Generative AI a General Purpose Technology?: Implications for productivity and policy. OECD Artificial Intelligence Papers, No. 40. https://doi.org/10.1787/704e2d12-en
De Simone, M., et al. (2025). From Chalkboards to Chatbots: Evaluating the Impact of Generative AI on Learning Outcomes in Nigeria. World Bank Policy Research Working Paper No. 11125. https://documents.worldbank.org/en/publication/documents-reports/documentdetail/099011425112521588
Deming, D. J. (2021). The Growing Importance of Decision-Making on the Job. NBER Working Paper No. 28733. National Bureau of Economic Research. https://www.nber.org/papers/w28733
Deming, D. J., Ong, C., & Summers, L. H. (2025). Technological Disruption in the Labor Market. NBER Working Paper No. 33323. https://doi.org/10.3386/w33323
Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2024). GPTs are GPTs: Labor market impact potential of LLMs. Science, 384, 1306–1308. https://doi.org/10.1126/science.adj0998
Fernandes, D., Villa, S., Nicholls, S., Haavisto, O., Buschek, D., Schmidt, A., Kosch, T., Shen, C., & Welsch, R. (2026). AI makes you smarter but none the wiser: The disconnect between performance and metacognition. Computers in Human Behavior, 175, 108779. https://doi.org/10.1016/j.chb.2025.108779
Han, J. (2025). Trust formation, error impact, and repair in human–AI financial advisory: A dynamic behavioral analysis. Behavioral Sciences, 15(10), 1370. https://doi.org/10.3390/bs15101370
Kestin, G., Miller, K., Klales, A., Milbourne, T., & Ponti, G. (2025). AI tutoring outperforms in-class active learning. Scientific Reports, 15, 17458. https://www.nature.com/articles/s41598-024-74384-2
Li, J., Yang, Y., Zhang, R., & Lee, Y.-C. (2024). Overconfident and Unconfident AI Hinder Human–AI Collaboration. arXiv preprint. https://arxiv.org/abs/2402.07632
Marguerit, D. (2025). Augmenting or Automating Labor? The Effect of AI Development on New Work, Employment, and Wages. LISER. https://arxiv.org/abs/2503.19159
Narayanan, A., & Kapoor, S. (2025). AI as Normal Technology. Knight First Amendment Institute (Report No. 25-09). https://knightcolumbia.org/content/ai-as-normal-technology
OpenAI. (2026, January). Ending the Capability Overhang: AI is advancing fast and countries must better use today’s capabilities to close the gap. https://cdn.openai.com/pdf/openai-ending-the-capability-overhang.pdf
Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39(2), 230–253. https://doi.org/10.1177/001872089703900202
Strauss, S. (2025). The Coming AI Disruption: Lessons from 1899–1929. Steven Strauss’s Notebook (Substack). https://stevenstrauss.substack.com/p/the-coming-ai-disruption-lessons
Strauss, S. (2026). AI’s Two Forces: Are You Building Capability—or Eliminating the Ladder? Steven Strauss’s Notebook (Substack). https://stevenstrauss.substack.com/p/ais-two-forces-are-you-building-capabilityor
VentureBeat. (2024, December 23). The code whisperer: How Anthropic’s Claude is changing the game for software developers. https://venturebeat.com/ai/the-code-whisperer-how-anthropics-claude-is-changing-the-game-for-software-developers