AI workforce transformation is now entering a more difficult phase.

The first phase was mostly about adoption: which tools employees should use, how much productivity they could unlock, and how quickly organizations could integrate AI into existing processes. The next phase is more sensitive. AI is beginning to reshape the structure of work itself: which roles remain, which teams shrink, which skills lose market value, and which employees are asked to support a transformation that may reduce their own place inside the organization.

This is where AI transformation becomes a trust test.

WiseTech Global, the Australian logistics software company, is becoming an early warning case. In February 2026, Reuters reported that WiseTech planned to cut about 2,000 jobs across 40 countries as part of a two-year AI overhaul — nearly one-third of its 7,000-person workforce. The reductions were expected to affect product development and customer service, with some teams facing cuts of up to 50%. 

The company’s own investor materials framed the shift as part of becoming a leaner, AI-led organization. WiseTech described AI as strengthening its moat, supporting internal efficiencies, and enabling a structurally lower cost base. Its 1H26 results presentation also referred to an initial “up to 50% headcount reduction” in product development and customer service as AI becomes embedded in products and internal processes.

Those signals matter because they show AI being used not only to improve work, but to redesign the workforce architecture around work. That is a different kind of transformation from adding copilots, automating routine tasks, or improving customer support workflows. It is a transition in which AI becomes part of the rationale for role reduction, team redesign, and cost-base restructuring.

The hard question is not whether companies should ever restructure around new technology. They always have. The hard question is whether AI-driven restructuring can be governed in a way that preserves trust, clarity, and managerial legitimacy while the organization is being redesigned.

The WiseTech case shows how quickly that trust can come under pressure.

According to the Guardian, nearly three months after the job cuts were announced, many WiseTech employees still did not know whether they would be affected. Workers described prolonged uncertainty and stress. The Guardian also reported employee concern about being asked to keep working on systems and processes connected to the AI transition while their own roles remained unclear.

This is the point where AI restructuring becomes more than a labor-cost decision. It becomes a governance problem.

A company may have a serious strategic rationale for becoming more AI-led. It may need to redesign teams, reduce duplicated work, restructure product development, or change how customer support is delivered. But if employees experience the process as opaque, prolonged, or extractive, the transformation can lose legitimacy before the operational benefits are fully realized.

The failure mode is not necessarily technical. It is managerial.

The deeper signal: uncertainty becomes organizational risk

Workforce transformation always creates uncertainty. AI makes that uncertainty sharper because the object of change is not only a department, market condition, or business model. It is the perceived value of human work itself.

When employees hear that AI will change the organization, they do not only ask whether a process will improve. They ask whether their expertise still matters. They ask whether their managers know what is happening. They ask whether they are being trained for a future role or quietly prepared for redundancy. They ask whether the systems they help build will be used to justify their own removal.

This is why prolonged ambiguity becomes dangerous.

The Guardian’s reporting suggests that WiseTech employees faced an extended period in which the company’s AI-led restructuring direction was public, but individual role clarity remained unresolved for many. 

That kind of uncertainty is not a soft issue. It affects execution. Employees who do not know whether they are staying may struggle to maintain focus. Managers without clear answers may lose credibility. Teams may become less willing to share knowledge. High performers may leave before decisions are finalized. People may comply on the surface while withdrawing trust underneath.

In AI transformations, this matters even more because adoption often depends on employee participation. Companies need workers to test tools, document processes, transfer knowledge, identify failure points, and adapt workflows. If those same employees believe the transformation is being done to them rather than with them, the organization may weaken the very cooperation it needs to make AI implementation work.

This is the governance tension at the center of the WiseTech case:

Can a company ask employees to help build an AI-led future while leaving them uncertain about whether they have a place in it?

The trust signals are already visible

Several signals suggest that the workforce-trust problem is not hypothetical.

The first is the reported duration of uncertainty. Nearly three months after the announcement, many employees reportedly still lacked clarity on whether they were affected. In a normal restructuring, uncertainty is already difficult. In an AI restructuring, it carries an additional symbolic burden: employees are not only waiting to learn whether a role disappears, but whether their craft has been devalued by a new system. 

The second is the concern that employees may be contributing to systems that could reduce their own roles. The Guardian reported worker anxiety around helping deploy AI while their future remained unclear. That is a distinct transition-governance problem. It raises questions about knowledge transfer, consent, recognition, incentives, and the ethics of employee participation during displacement risk. 

The third is labor-relations escalation. Reuters reported that an Australian union sought urgent talks with WiseTech after the AI-linked job cuts were announced. The Guardian also reported a union-backed petition calling for fair treatment and transparent communication, with more than 300 signatures. 

The fourth is the gap between investor-facing clarity and employee-facing ambiguity. WiseTech’s investor materials present a clear efficiency narrative: AI-led organization, lower cost base, internal efficiencies, and headcount reductions in specific functions. Employees, according to the Guardian’s reporting, experienced a less clear reality: prolonged waiting, uncertainty, and stress. 

That gap is dangerous. When investors receive a coherent transformation story and employees receive uncertainty, trust can deteriorate quickly. The organization may still execute the restructuring, but it may do so with a weakened psychological contract.


Adoption governance is not the same as transition governance

The WiseTech case should be read alongside other recent enterprise AI signals, but not collapsed into them.

Amazon and Accenture raise one set of questions. Amazon’s AI hiring tools and Accenture’s large-scale Copilot rollout show AI entering managerial workflows: hiring, planning, productivity culture, and leadership expectations. Those cases raise questions about AI adoption governance — how to supervise AI when it becomes part of decision support, performance norms, and work routines.

WiseTech raises a different question.

It is not only about how AI enters work. It is about what happens when AI changes who remains inside the work.

That distinction matters. AI adoption governance asks how to use AI responsibly inside existing workflows. AI workforce transition governance asks how to manage trust, communication, role redesign, and employee dignity when AI changes the workforce structure itself.

The first is about governing tools as they enter management systems. The second is about governing the human transition when those tools become part of restructuring logic.

This is where many companies may be underprepared. They may have AI policies, procurement reviews, security checks, and productivity pilots. They may not have a serious governance model for the workforce transition AI creates.

What AI workforce transition governance requires

The WiseTech case points to a capability that many organizations will need but few have fully built: AI workforce transition governance.

This is not traditional change management with an AI label. It is a more specific discipline because AI affects the perceived value of work, the legitimacy of expertise, and the timeline of role redesign.

It begins with role-impact clarity. Employees need to understand which roles are affected, when decisions will be made, and what operational logic is driving the change. Absolute certainty is not always possible early in a transformation, but prolonged ambiguity without a credible timeline creates avoidable harm.

It also requires communication sequencing. Public announcements, investor materials, manager briefings, and employee communications must be aligned. If the market hears a confident AI efficiency story before employees understand what it means for them, the organization creates a credibility gap. That gap can become a trust crisis.

Manager enablement is another critical layer. Managers are often the first line of emotional and operational response, but they may not have enough information, authority, or support to guide their teams. In an AI restructuring, this is especially damaging because managers must maintain performance while employees question whether their work still has a future.

Internal mobility and reskilling must also be concrete. General promises of “upskilling” are not enough when employees face possible redundancy. Transition pathways need to be realistic: which roles are available, what skills are required, how selection will work, what support is provided, and what happens if redeployment is not possible.

The most sensitive issue is employee participation ethics. If employees are asked to train, improve, document, or deploy systems that may reduce their own roles, the organization should treat that as a special governance category. It may require clearer disclosure, additional recognition, retention incentives, transition protections, or explicit agreements about how employee knowledge will be used.

Finally, psychological safety and dignity must be treated as part of execution, not as a communications afterthought. Uncertainty, silence, and vague reassurances can become organizational harm. A humane transition is not only morally preferable; it is more likely to preserve cooperation, knowledge sharing, and operational continuity.

What not to conclude from WiseTech

The WiseTech case should not be overread.

It does not prove that AI restructuring is always wrong. It does not prove that companies should avoid workforce redesign. It does not prove that AI cannot improve productivity, reduce costs, or strengthen competitiveness.

Companies do have to adapt. Some roles will change. Some teams will shrink. Some work will be automated or reorganized. Pretending otherwise would be unserious.

But the case does show that AI restructuring creates a distinct governance problem. When AI becomes part of the rationale for job cuts, the transition cannot be handled as a technical rollout followed by a delayed HR process. The workforce implications are not downstream. They are central to the transformation itself.

The serious question is not whether companies can use AI to become more efficient. They can.

The serious question is whether they can do so without breaking the trust required to make the organization function.

The new standard: transition legitimacy

The next phase of AI workforce transformation will not be judged only by productivity gains or cost reductions. It will also be judged by transition legitimacy.

That means employees understand the logic of change, even when the news is difficult. Managers are equipped to communicate with credibility. Role impacts are mapped before uncertainty spreads across the organization. Internal mobility is real rather than symbolic. Employees are not asked to contribute to their own displacement without safeguards. The company preserves dignity even when roles are reduced.

WiseTech is becoming a warning signal because it shows what can happen when the efficiency story moves faster than the transition system.

The broader lesson is not limited to one company or one country. Any enterprise pursuing AI-led restructuring will face the same governance test. The more AI becomes capable of changing the structure of work, the more leaders will need to govern the human transition with clarity, speed, and fairness.

AI workforce transformation does not fail only when the technology fails.

It can also fail when the people asked to carry the transformation no longer trust the process.