Executive Summary
Most organizations are asking AI the wrong question.
Most organizations look to Artificial Intelligence (AI) as the great headcount reducer. The prevailing conversation in boardrooms and budget meetings fixates on a narrow set of questions: Which roles will AI eliminate? Which departments can be reduced? How quickly can we offset labor cost against technology spend? This framing is not only strategically limited but also analytically flawed. It treats headcount elimination as the only lever AI offers, and that mindset is the wrong place to start.
A rigorous new study from the Special Competitive Studies Project examined how artificial intelligence might affect the United States Army Officer Corps—one of the most complex, diverse, and performance-driven workforces in the world. The researchers did not ask which roles AI would replace. They asked something far more useful: which specific tasks within each role are exposed to AI, and what does that mean for how the work should be organized?
The findings are significant. AI affects between 25% and 64% of the task content across 131 officer specialties, and approximately 80% of those specialties have 40% or more of their daily workload exposed. Critically, the researchers did not conclude that officers should be eliminated. They concluded that how officers work must fundamentally change.
| Metric | Value |
| --- | --- |
| Task content affected by AI (range across specialties) | 25–64% |
| Specialties with 40%+ of daily workload exposed | ~80% |
| U.S. Army officer specialties studied | 131 |
That distinction between tasks and roles, and between redesign and elimination, is the most important strategic insight business leaders can take from this research. Organizations that internalize it will build more effective, more adaptable teams. Those that miss it will cut roles, bolt AI tools onto broken workflows, and wonder why the returns fall short.
AI does not eliminate roles. It rewrites the task composition of roles. The organizations that understand this will redesign their workforces. The rest will just add software.
The Question Most Companies Are Getting Wrong
Ask a leadership team how they’re planning for AI, and the conversation will drift toward reduction. Which functions can we automate away? How many of these roles still make sense in two years? What’s the headcount case for the investment?
These are understandable questions. AI procurement decisions require financial justification, and labor cost is the most quantifiable lever in the model. On paper, this looks like sound decision-making. But optimizing for headcount reduction misses the deeper structural opportunity that AI actually creates.
This approach rests on a flawed premise: that a job is a monolithic unit of work. In practice, every role is a portfolio of tasks: discrete, categorizable activities that vary significantly in their cognitive demands, their susceptibility to automation, and their value to the organization.
A sales executive is not simply “a salesperson.” They move across a distinct set of activities each day:
- Researching prospects
- Drafting outreach
- Reviewing pipeline data
- Running discovery calls
- Aligning internally on proposals
- Building client presentations
- Managing existing relationships
Each of those activities has a different relationship to AI. Some are highly automatable. Some require human judgment that AI cannot replicate. And some sit in between, where AI augments but does not replace. The organizations that map that distinction at the task level will make far better decisions about where AI creates leverage and where it does not.
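As a concrete illustration of task-level mapping, the sales role's activity portfolio above can be triaged into rough categories. The categories and every assignment below are hypothetical, a sketch of the exercise rather than a verdict on any real role:

```python
from collections import Counter

# Hypothetical triage of a sales role's task portfolio (assignments are
# illustrative, not the study's): classify each daily activity by its
# relationship to AI today.
SALES_TASKS = {
    "Researching prospects":            "automate",  # AI handles the bulk
    "Drafting outreach":                "augment",   # AI drafts, human edits
    "Reviewing pipeline data":          "automate",  # runs in the background
    "Running discovery calls":          "human",     # live judgment and rapport
    "Aligning internally on proposals": "augment",   # AI preps, human decides
    "Building client presentations":    "augment",   # AI scaffolds the deck
    "Managing existing relationships":  "human",     # trust is not automatable
}

# Summarize the role's AI relationship at the task level, not the job level.
mix = Counter(SALES_TASKS.values())
print(dict(mix))  # {'automate': 2, 'augment': 3, 'human': 2}
```

Even this toy inventory shows why "eliminate the salesperson" is the wrong unit of analysis: only a minority of the portfolio is fully automatable, and the rest splits between augmentation and irreducibly human work.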
When AI enters that picture, it does not eliminate the salesperson. It dramatically changes the proportion of time they spend on each activity. The research work becomes faster. The draft messages get scaffolded. The pipeline analysis runs in the background. What remains and what becomes more valuable is the judgment, the relationships, and the contextual reading of a live conversation that no model currently replicates.
The Right Frame: Task Exposure, Not Job Elimination
The Army study authors make this point explicitly and rigorously. Their foundational premise: AI systems are designed to perform specific cognitive functions, such as classifying information, summarizing documents, optimizing scheduling, and drafting communications, rather than replicating the full complexity of a human role within an organizational context.
Their methodology, which we will examine in the next section, operationalizes this insight into a quantifiable framework. The result is a way of talking about AI impact that is both more accurate and more actionable than the standard headcount narrative.
For commercial organizations, this reframe has direct consequences for strategy. If the question shifts from “How many roles do we reduce?” to “How do we redesign roles given what AI now handles?”, the whole planning exercise changes. Talent strategy, hiring criteria, training investment, team structure, and performance metrics all need to be reconsidered through a task-level lens.
A Methodology Worth Stealing
The Army study’s approach is the most rigorous publicly available framework for quantifying AI’s impact on knowledge work roles, and its insights translate directly to commercial workforce planning.
Step One: Decompose Roles Into Standardized Work Activities
The researchers drew on the U.S. Department of Labor’s O*NET database, which defines 41 high-level work activities that together describe the full range of human cognitive and physical labor. Activities include things like:
- Getting information
- Analyzing data or information
- Making decisions and solving problems
- Communicating with supervisors and peers
- Organizing and prioritizing work
- Documenting and recording information
For each activity, O*NET captures two dimensions: importance (how central the activity is to the role) and complexity (how sophisticated the execution needs to be). Together, these two variables determine how much a given activity drives the overall character of a role, and how much AI can realistically affect it.
Step Two: Assess AI Capability Against Each Activity
The researchers conducted a structured literature review to determine which of the 41 work activities are meaningfully affected by AI tools currently available or in active commercial development. They identified 23 of 41 activities as having significant AI impact at either moderate or high complexity levels.
Critically, the framework accounts for complexity thresholds. If a work activity is important to a role but performed at a level of complexity that exceeds what current AI models can handle, it is not counted as impacted. A junior analyst reviewing quarterly data is differently situated than a CFO building a multi-scenario capital allocation model, even though both are “analyzing data.” The framework distinguishes these cases.
Step Three: Calculate a Weighted AI Impact Percentage
For each role, the researchers computed an AI impact percentage: the importance-weighted share of the role’s work activities that fall within AI’s capability at the complexity level the role requires. The weighting is what makes this framework credible. It is not asking what AI could theoretically do. It is asking what AI does to the work that actually drives the role.
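A minimal sketch of that calculation, with entirely hypothetical scores (the study's actual data, scales, and thresholds are not reproduced here): an activity counts as exposed only if AI capability reaches the complexity level the role requires, and the role's impact is the importance-weighted share of exposed activities.

```python
# Hypothetical activity rows for one role:
# (activity, importance 1-5, complexity the role requires,
#  highest complexity current AI can handle for this activity).
# All numbers are invented for illustration.
ACTIVITIES = [
    ("Getting information",           4.5, 3, 5),  # within AI's reach
    ("Analyzing data or information", 4.0, 5, 4),  # too complex for AI here
    ("Documenting and recording",     3.5, 3, 5),  # within AI's reach
    ("Making decisions",              5.0, 6, 3),  # too complex for AI here
]

def ai_impact(activities):
    """Importance-weighted share of activities within AI's complexity reach."""
    total = sum(imp for _, imp, _, _ in activities)
    exposed = sum(imp for _, imp, needed, capable in activities
                  if needed <= capable)
    return exposed / total

print(f"AI impact: {ai_impact(ACTIVITIES):.0%}")  # AI impact: 47%
```

Note how the complexity threshold does the real work: "Analyzing data" is highly important here, but because the role performs it above AI's current ceiling, it contributes nothing to the exposure score.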
Applying This Framework to Commercial Roles
The same O*NET taxonomy that the Army used applies directly to commercial occupations. The framework can be used as is for any role with a civilian equivalent, which is the vast majority of knowledge work positions.
A go-to-market team can map Sales Development Representatives, Account Executives, Customer Success Managers, and Revenue Operations Analysts to their O*NET equivalents and run the same calculation. The result is a role-by-role view of where AI changes the task composition of work and by how much.
This is a fundamentally different planning input than the outputs of a generic AI readiness survey or a vendor-sponsored capability assessment. It grounds the conversation in actual work activities, not abstract potential.
What the Data Reveals
The study’s results across 131 Army officer specialties carry several findings that translate with force to private-sector workforce planning.
Finding 1: No Role Is Immune
Every single officer specialty analyzed showed meaningful AI exposure. The range, from a low of 25% for Infantry Officers to a high of 64% for Family Medicine Physicians, confirms that AI’s reach extends across the full spectrum of professional work, from the most cognitively intensive knowledge roles to those with significant physical and operational dimensions.
The floor of 25% is itself a significant number. Even in roles where most of the work is physical, relational, or operates under extreme uncertainty, like a combat infantry officer, one quarter of daily activities are exposed to AI influence. In commercial knowledge work, where the physical dimension is largely absent, exposure rates will generally be higher.
Finding 2: The Distribution Is Heavily Skewed Toward High Exposure
The most striking finding from the Army data is not the range; it is the distribution within that range. Approximately 80% of officer specialties have 40% or more of their daily workload exposed to AI impact. The midpoint of the distribution sits well above the 25% floor.
For commercial knowledge work, roles built predominantly around information processing, communication, analysis, and coordination, the equivalent figures would likely be higher still. A civilian consulting analyst, revenue operations leader, or customer success manager is not going to come in at 25%. Their task exposure is concentrated precisely in the areas AI is most capable of affecting.
Finding 3: AI Changes Composition, Not Just Volume
One of the study’s clearest messages is that AI’s impact is compositional. It does not simply make existing work faster; it shifts the internal mix of activities that define a role. As automatable or augmentable tasks shift toward AI, the residual human work becomes more concentrated in judgment, relationships, leadership, and contextual reasoning.
In practical terms, this means the nature of individual contribution is changing. An individual contributor who once spent the full day doing the work (writing the code, building the presentation, drafting follow-up emails, compiling meeting notes) can now hand significant portions of those tasks to AI tools and agents, so that work represents a fraction of the day. This creates two new realities. First, the individual needs a new skill set: the ability to orchestrate those tools effectively. AI directed blindly produces bad output; the person directing it has to know what good looks like and how to get there. Second, there is a substantial amount of recovered time. The organizations that figure out how to channel that time toward strategic, higher-value work, rather than simply absorbing it into more busywork, will be the ones that realize the full return on their AI investment.
This has a specific implication for talent: the skills that make someone excellent in an AI-augmented role are not the same skills that made someone excellent in the pre-AI version of that role. A sales rep who was exceptional at crafting prospecting emails from scratch may be less differentiated in a world where AI handles the first draft. What matters more is the quality of their judgment about which prospects warrant deeper investment and their ability to build trust in a live conversation.
Finding 4: Wartime Tasks Are More AI-Exposed Than Peacetime Tasks
Perhaps the most counterintuitive finding in the study: AI had a higher measured impact on wartime and combat responsibilities than on peacetime and garrison responsibilities for the two combat arms studied (Infantry and Field Artillery).
The explanation is instructive. Under combat conditions, certain cognitive tasks (gathering information rapidly, identifying objects and events, monitoring dynamic surroundings) become dramatically more important. These are precisely the tasks where AI has strong capability. At the same time, the complexity of some tasks decreases under combat stress, because operators must simplify their decision-making to function effectively under duress. This brings more tasks within the range of AI assistance.
In the commercial analog, AI’s value is often highest during periods of peak operational pressure: when deal volume spikes, a product launch demands rapid coordination, or a client crisis requires real-time information synthesis. The organizations that have integrated AI into their workflows before those moments arrive will outperform those still evaluating which tools to pilot.
Four Imperatives for Business Leaders
The study’s recommendations for the Army map directly onto actionable imperatives for commercial organizations, described in the sections that follow.
Where to Start
These imperatives describe what to prioritize. The natural follow-up question is where to begin. Based on how organizations that have navigated this transition most effectively have approached it, the sequence looks like this:
1. Understand what your people actually do. Not what their job descriptions say, but what they actually spend their time on. Run a “day in the life” observation across roles. Watch the work. The gap between the documented role and the real one is almost always wider than leadership expects, and that gap is where the most important planning data lives.
2. Identify your early adopters. Find the people in the organization who are already innovation-tolerant: the ones who are curious, open, and likely already using AI tools on their own. These are your tip of the spear. They will be the first to test new workflows and provide honest feedback on what works. Do not start with the most resistant part of the organization.
3. Bring in expertise on what can be automated. Someone, internal or external, needs to look at the actual task inventory and assess which tasks are ripe for AI augmentation, which are not yet ready, and which require human judgment regardless. This is a diagnostic step, not a technology deployment.
4. Design and pilot with the early adopter group. Build the new workflows and test them with the small group identified in step two. Validate that the tools work, that the orchestration model makes sense, and that the output quality holds before expanding.
5. Roll out to the broader organization. Once the pilot has been validated, extend the new workflows to the wider team. Use the early adopters as internal champions who can demonstrate what works and help their peers through the transition.
6. Invest in ongoing coaching and training. This is not a one-time change management exercise. People need sustained support in learning how to work with AI tools effectively: how to direct them, evaluate their output, and integrate them into daily work. The organizations that treat this as a training problem, not just a deployment problem, will see materially better results.
The AI Workforce Architecture Imperative
The deepest implication of the Army study is one that most AI implementation discussions never surface: AI adoption is, fundamentally, a workforce architecture imperative.
Technology deployment is the easy part. Computational power is available. Models are accessible. Tooling has commoditized at a remarkable pace. What has not commoditized is the organizational capacity to redesign work around AI capabilities: to think clearly about which tasks should shift, which roles should evolve, which skills become more valuable, and how teams should be structured to take full advantage of what AI can do.
Four Dimensions of the AI Workforce Architecture
Role Definition
As AI assumes a meaningful portion of the task content within roles, job descriptions built around pre-AI task sets become misleading. They will attract the wrong candidates, set the wrong performance expectations, and create development plans that do not reflect the actual work. Role definition needs to be rebuilt from a task-level understanding of what AI handles and what remains uniquely human.
Talent Development
Training programs built to develop skills that AI is absorbing are training programs that are actively losing value. Organizations need to evaluate their talent development investments against a clear map. Which competencies are becoming more important as AI takes on more of the cognitive load, and which are becoming less differentiating? Critical thinking, contextual judgment, complex communication, relationship development, and creative synthesis become more central. Procedural execution and information processing become less so.
Career Pathway Design
Many organizations develop early-career talent through roles that are heavily task-oriented, roles that build foundational skills through repetitive execution. If AI absorbs a significant portion of that task content, the developmental logic of those early-career roles changes. Organizations that do not redesign career pathways around AI’s presence risk two outcomes: their development pipelines produce less-prepared leaders, or they under-invest in junior roles that no longer look valuable on the surface but remain essential for building judgment.
This shift is already visible in technical organizations, where junior hires are being trained not as traditional hands-on-keyboard practitioners but as AI orchestrators; people whose primary skill is directing, evaluating, and refining AI-generated output rather than producing everything from scratch. The senior talent who once spent their time doing the work now spend it coaching others on how to manage the tools that do it. That pattern will extend well beyond engineering.
Organizational Structure
As tasks are redistributed between humans and AI systems, the case for certain organizational layers weakens. Coordination functions that exist primarily to move information between people may become less necessary when AI provides better information visibility. Middle layers that translate data into summaries for decision-makers may be reduced when decision-makers have direct AI-augmented access to structured analysis. These are not arguments for flattening organizations indiscriminately. They are arguments for redesigning structure around how work actually flows in an AI-enabled environment, not around assumptions that predate it.
The companies that will lead in the next decade are not the ones that deployed AI fastest. They are the ones that redesigned their organizations most deliberately around what AI changed.
What This Means for Go-to-Market Organizations
Chances are, you do not have this capability readily available in-house. You will need a third-party expert who can implement the task-level AI framework. Select a partner that works with growth-stage and enterprise organizations on go-to-market strategy, team design, and operational effectiveness. The Army study’s framework maps with particular clarity onto the GTM functions that define revenue performance, and the topics below must be considered when implementing it.
The GTM Stack Is Getting Redesigned Whether You Plan For It or Not
A traditional B2B revenue organization is structured around a set of assumptions about what different roles actually do. SDRs generate pipeline through prospecting activities. AEs convert opportunities through discovery, solution design, and negotiation. Customer Success protects and expands revenue through relationship management and value delivery. RevOps maintains the data and process infrastructure that ties the system together.
AI is altering the task composition of each of these roles. Prospecting research, outreach drafting, and pipeline data analysis, all core SDR and RevOps tasks, are among the first to be meaningfully automated or augmented. The question is not whether this happens; it is whether organizations will respond deliberately or reactively.
Three Implications for GTM Leaders
Implication 1: The Case for Large SDR Teams Is Getting Weaker
The tasks that define an early-career SDR role—account research, list building, personalized outreach at scale, meeting scheduling—are precisely the tasks where AI tools are advancing most rapidly. This does not mean SDR functions disappear. It means the ratio of SDRs to AEs that makes sense economically and operationally is shifting. Organizations that hold onto legacy SDR-to-AE ratios because they have always worked that way will carry unnecessary labor costs while competitors who have redesigned the function operate with better efficiency and, in many cases, better pipeline quality.
There is an equally dangerous failure mode on the other side: organizations that over-automate outreach without human oversight. Turning on the machine and blindly sending AI-generated emails at scale does not just waste money; it burns the territory. Prospects who receive generic, poorly targeted messages will remember the bad impression long after the campaign ends. The task does not go away when AI enters the picture. It transforms from writing outreach into orchestrating it: reviewing the output, ensuring quality, and directing the volume toward the right targets so that the freed-up time goes toward building genuine relationships with the prospects who respond.
Implication 2: The Human Premium Moves Up the Funnel
As AI handles more of the early-stage pipeline development work, the relative importance of human judgment and relationship quality at the mid-to-late stage increases. Complex selling, where the ability to read a room, build trust across a buying committee, and navigate organizational politics is what moves deals, becomes the primary differentiator. Investment in developing these skills is not optional. It is the core of a durable competitive advantage.
Implication 3: RevOps Becomes a Strategic Function or It Becomes Redundant
Revenue Operations has historically owned the data, the tooling, and the process layer of the GTM organization. AI is transforming all three. Organizations that position RevOps as a backward-looking reporting function will find that AI can handle most of that work more efficiently. Those that reposition RevOps around designing AI-enabled workflows, governing model quality, and translating AI-generated signals into commercial strategy will find that the function becomes more valuable, not less.
The Army study is significant not because it is about the military, but because the military represents one of the most rigorous and high-stakes environments for workforce planning in existence. The U.S. Army operates with decade-long planning horizons, no tolerance for operational failure, and deep investment in human talent development. When an organization like that concludes AI requires a fundamental rethinking of how work is structured, commercial leaders should pay attention.

The core insight is straightforward and actionable: stop asking which roles AI will replace and start asking which tasks within each role AI affects and what that means for how those roles should be designed.
That question leads somewhere more useful than headcount reduction models. It leads to a clear-eyed view of where AI investment delivers the greatest operational leverage, which skills become more valuable as the task landscape shifts, how organizational structure should evolve as workflows change, and where human judgment remains irreplaceable and should be developed accordingly.
Organizations that get this right will not just be more efficient; they will be more effective. Their people will spend more time on the work that actually creates value. Their teams will be better designed for the work that actually exists. And their AI investments will compound in ways that purely cost-reduction-oriented deployments never will.
The challenge is not adopting AI. The challenge is rethinking the work itself and building an organization designed for what the work actually is.
About Cortado Group
Cortado Group is a growth execution and go-to-market (GTM) consulting firm that accelerates revenue growth and increases enterprise value. We build the operational infrastructure, strategic frameworks, and AI-enabled capabilities needed to compete and win.
This whitepaper is part of the Cortado Thought Leadership Series, which translates research and strategic frameworks into actionable insights for commercial organizations.