The People Question Inside Every AI Strategy.

In our work with leaders across federal, commercial, and higher education, we keep noticing the same pattern. The AI conversation usually starts with governance, data, risk, and value. Then, often a few meetings in, someone asks a quieter question:

What is our plan for the people whose work is going to change?

It is the right question, and it tends to come a little later than it should. This piece is about what we are seeing when leaders bring it forward, and why getting to it earlier seems to make everything else work better.

In brief

Most AI roadmaps treat the workforce question last. Governance, data, risk, and value get worked out first, and the people whose work is going to change come up later, often after the architecture is already set.

The evidence is still developing, but the direction is clear. Work is shifting. Roles are being redesigned. New capabilities will matter. Some of the most interesting research is mixed enough that confident predictions are premature, yet consistent enough that planning is overdue.

The organizations that adapt well bring the workforce question forward. They plan early for reskilling, redeployment, oversight, and transition support, and they treat that planning as a condition for AI value, not a tax on it.

What the data is showing.

The research right now is more mixed than the headlines suggest, and that nuance matters. A 2025 NBER working paper found limited near-term effects from AI chatbot adoption on earnings and recorded hours, while still showing changes in tasks and occupational movement (NBER, 2025). The labor market may look calm on the surface while work shifts underneath.

Other evidence shows a more concentrated signal. Stanford researchers found that early-career workers in the most AI-exposed occupations experienced a 13 percent relative decline in employment, while more experienced workers in the same fields were stable or continued to grow (Stanford Digital Economy Lab, 2025). The authors have continued to refine the analysis, and the size of the effect attributable to AI alone remains an open empirical question. Even with that caveat, the finding has been on our minds. If entry-level work thins out, organizations do not just lose jobs. They lose the pipeline where people learn judgment, context, and craft.

So the practical leadership question is less about whether AI is changing the workforce, and more about what each organization wants to do about the change it is already designing. Answering it earlier tends to make the trust work easier, not harder.

AI is the latest technology shift, not the first.

The printing press, the power loom, the assembly line, the internet, and the cloud each changed how work was done. Some roles thinned out. Others emerged that no one had a name for yet.

The same pattern is showing up with AI. Work is taking shape around AI orchestration, governance, risk, oversight, workflow design, quality review, and human judgment. The World Economic Forum's Future of Jobs Report 2025 projects 170 million jobs created and 92 million displaced by 2030 across major labor market trends, with technology and AI as significant drivers. Demand is rising for roles that combine domain expertise with AI literacy, including AI system architects, ethics and governance specialists, and human-AI collaboration designers.

History does not suggest that technology change is painless. It suggests that organizations that adapt well redesign the work system and the people system together.

Take Ford. The moving assembly line cut the time to build a Model T from 12.5 hours to 93 minutes, but pushed turnover at the Highland Park plant to 370 percent in 1913. The Five Dollar Day was a workforce response to that operating model problem, and within a year turnover fell to 16 percent and productivity rose between 40 and 70 percent. Ford had to change the people model because the operating model had changed.

Toyota took a different path. Through jidoka, often described as automation with a human touch, Toyota built human oversight into the production process. Workers were expected to notice problems, stop the line, and improve the system.

A more recent example comes from telecommunications. In the 1980s, AT&T navigated digital technology, divestiture, deregulation, and new competition at the same time. Layoffs and restructuring were part of the response, but not all of it. Through contracts with the Communications Workers of America, AT&T and the Bell companies agreed to direct $36 million to retraining under the 1983 agreement, and the 1986 contract established the Alliance for Employee Growth and Development, a joint labor-management nonprofit that gave CWA-represented employees access to career counseling, pre-paid tuition, basic skills training, and job search support. The transition was not painless, but the workforce question was treated as part of the transformation, not a footnote to it.

Different eras. Different industries. Same lesson. The technology matters, but the advantage comes from how intentionally organizations adapt the work around it.

Tasks will disappear. Roles will shrink. New roles will emerge. But those new roles will not automatically help the people most affected by the change. That is why workforce planning has to sit inside the AI strategy, not behind it.

Where workforce planning fits in the AI journey.

At RightSeat, we describe the AI journey in three stages. Adoption gets the tools into people’s hands. Fluency helps teams use them with judgment. Adaptation redesigns work, roles, governance, and accountability around what AI now makes possible.

Adoption is not the finish line. Adaptation is.

Workforce planning lives in the Adaptation stage, and that placement matters. When the people question gets asked early, alongside the architecture and governance work, it tends to shape better decisions across the whole roadmap. When it gets asked after roles have already been redrawn, the options shrink. Most leaders we work with want the bigger option set.

What employees are asking.

In the workshops and listening sessions we run, the questions from employees are remarkably consistent. Which of my tasks are changing? Which roles are being redesigned? What new skills will matter? What new roles are coming? What is going away? What are the rules for when I can and can't use AI?

We have noticed that organizations that can answer those questions specifically tend to move faster on AI, not slower. Specificity builds trust, and trust is what lets people lean in instead of brace.

What the research says about retraining that works.

There is good news and a wrinkle in the retraining data. A 2025 NBER working paper found that AI-exposed workers see real earnings returns from retraining, around $1,470 per quarter on average. The wrinkle is that the path matters. Workers who pursued more general training outperformed those who retrained specifically for AI-intensive occupations by 29 percent (Hyman, Lahey, Ni, and Pilossoph, NBER, 2025).

That finding has changed how we think about workforce programs. “Learn AI” is a useful slogan, but it is not a strategy. Building broader, transferable capability often produces better outcomes for people than chasing AI-specific roles. The most effective programs we see are anchored in the work people do, not in generic tool training.

The talent already on the payroll.

One of the more useful exercises we run with leaders looks at redeployment before displacement. AI reduces the need for some tasks. It often increases the need for others, including judgment, customer context, process knowledge, quality review, and change management. Many of those strengths already sit inside the organization.

Hiring externally for capabilities that already exist on the payroll is expensive. It is also a quiet signal to everyone watching how the organization treats its people through a major change. The leaders furthest along on this tend to look inside first.

Designing graceful transitions when roles do change.

Some roles will change or end. When they do, the menu of options is wider than people sometimes assume. Paid transition time, career coaching, credential support, internal references, alumni hiring networks, and warm handoffs to public workforce resources all show up in programs that work. In the U.S., the Department of Labor’s Rapid Response services connect affected workers with career counseling, job search support, reskilling, skills upgrading, and training (U.S. Department of Labor).

The way an organization handles people on the way out tends to shape how the people staying behind feel about the organization on the way forward. We have seen that show up in engagement data, in retention, and in the speed of the next change initiative.

Why this shows up in performance data.

This is the part that surprises some of the leaders we work with. The workforce question is not only a values question. It is a performance question. When transitions feel opaque or rushed, trust drops. Voluntary attrition rises among the people the organization most wanted to keep. Change initiatives slow down because frontline teams stop engaging. AI investments quietly underperform because adoption stays shallow.

Workforce planning is not a tax on AI ROI. In our experience, it is one of the conditions for it.

The metrics leaders are starting to track.

Most AI scorecards still measure productivity, cost savings, and headcount. The leaders furthest along on Adaptation are starting to track a few more alongside those standard measures:

  • How many people were reskilled before any role change
  • How many moved into new roles inside the organization
  • How many found comparable work after transition
  • How wages changed for transitioned workers
  • How trust and engagement moved through the change

These are not soft metrics. They predict whether the next phase of AI investment lands or stalls.

The conversation we are finding most useful.

AI is rarely a clean choice between productivity and people. The more interesting conversation, and the one we keep coming back to with leaders, is how to use AI to create more valuable work, more capable organizations, and more opportunity for the people doing the work.

If your AI roadmap already has a people plan inside it, you are ahead of where most organizations are right now. If it does not yet, that is the conversation worth having before the next phase of investment.

What would change in your AI strategy if workforce planning sat next to the architecture work, instead of after it?

RightSeat helps leaders move from AI Adoption to Fluency to Adaptation. We are human co-pilots for your AI journey.
