Artificial intelligence is still often framed as a race of models, tools and technical breakthroughs. But the more consequential shift is now happening elsewhere. Across enterprise, public education and multilateral policy, the central question is no longer simply whether AI can be adopted. It is whether institutions can redesign themselves quickly enough to use it well.

That shift is becoming visible across very different settings. BlackRock is reportedly rolling out RockAI to let employees, including nontechnical staff, build specialized AI agents. In the UK, the government is inviting companies to develop safe, personalized AI tutoring tools for disadvantaged pupils, with teacher supervision and national benchmarks built into the process. UNESCO, meanwhile, has launched a regional observatory on AI in education for Latin America and the Caribbean to support policy, evidence-building and teacher capacity. These are very different institutional contexts, but they are converging on the same challenge: what conditions make AI deployment usable, accountable and durable?

Enterprise AI is becoming an operating model question

In enterprise, the key signal is no longer that large firms are experimenting with AI. It is that some are trying to embed it into operating infrastructure. BlackRock matters here not because it has adopted AI, but because its reported approach suggests a move beyond scattered productivity tools toward AI agents embedded across functions. That shifts the real question from what the model can do to who can deploy it, how outputs are validated, where responsibility sits, and what governance makes wider use trustworthy.

A second enterprise signal points in the same direction. Reuters reports that Merck is partnering with Google Cloud in a deal worth up to $1 billion over several years to expand AI across drug research, regulatory work, manufacturing and commercial operations. Merck’s chief information and digital officer said, “This isn’t a pilot,” adding that the company is already submitting reimbursement dossiers with the new capability and scaling it globally. That matters because it suggests that in at least some sectors, the AI conversation is moving from experimentation to operational integration under real accountability constraints.

Rising use does not automatically mean real transformation

Workforce data reinforces that this transition is real, while also showing why it remains incomplete. Gallup reported in April 2026 that half of employed American adults say they use AI in their role at least a few times a year, and 41% say their organization has integrated AI tools to improve organizational practices. At the same time, Gallup found that evidence of AI fundamentally changing how work gets done across organizations remains limited, and that only about one in 10 employees in AI-adopting organizations strongly agree that AI has transformed work across their organization. In other words, usage is rising faster than institutional redesign.

In education, AI is becoming a public implementation challenge


The same logic is now appearing in public education. The UK government’s tutoring initiative is notable not because it celebrates AI in the abstract, but because it treats AI as a public implementation problem. The tools are expected to align with the national curriculum, support disadvantaged pupils, be tested in schools under teacher supervision, and, if successful, become available nationally from 2027. Up to eight organizations are expected to join the initial pioneer group.

The policy language around the initiative is especially revealing. Education Minister Olivia Bailey said that “getting this right matters just as much as moving quickly” and that every tool must be “built with teachers, tested rigorously, and held to the highest safety standards.” That is not the language of technological evangelism. It is the language of institutional design: co-development, testing, legitimacy and deployment thresholds. The UK case is therefore more than an edtech announcement. It is a live example of a government specifying the conditions under which AI can enter classrooms responsibly.

UNESCO’s observatory signals a need for coordination, not just experimentation

UNESCO’s new observatory adds a third signal, and a different institutional layer. UNESCO says the observatory is the first multi-stakeholder platform of its kind in the region and “not a space for passive observation, but rather a coordinated action” that will generate contextualized evidence, guide public policy, strengthen teacher training and decision-making, and promote innovations validated in classrooms under ethical frameworks. This matters because it reflects a growing recognition that once many systems begin moving at once, isolated pilots and fragmented experimentation are no longer enough. Institutions also need mechanisms for comparative learning, coordination and evidence accumulation.

UNESCO’s framing is notable for another reason: it explicitly connects education to the future of work. Its launch note says the expansion of AI in education poses decisive challenges for the future of work in the region and argues that labor reskilling cannot rest solely on education systems. That is an important signal for anyone trying to bridge AI in education and workforce transformation rather than treating them as separate policy silos. It suggests that the next phase of adoption will require institutional responses that are intersectoral, not merely pedagogical or technical.

A broader pattern is emerging across sectors

A recent World Economic Forum article written by the founder of the Observatory, Qiqing He, helps place these developments in a broader frame. That article argued that successful AI implementation depends less on technical access than on local human capacity and policy alignment, and that the next challenge is not adoption itself but readiness. Read alongside BlackRock, Merck, the UK initiative and UNESCO’s observatory, that argument now looks less like a sector-specific observation and more like a cross-sector pattern. Enterprises face readiness bottlenecks in workflow integration and managerial design. Governments face readiness bottlenecks in procurement, evaluation and public legitimacy. Multilateral bodies face readiness bottlenecks in coordination and evidence infrastructure. In each case, the constraint is not simply model capability. It is institutional preparedness.

Four conditions will shape the next phase of AI adoption

If that diagnosis is right, then the next phase of AI adoption will depend on four conditions.

The first is capacity. People need time, skills and support to use AI well, not just access to tools. Gallup’s data suggests that rising usage has not yet translated into deep organizational transformation, while UNESCO’s education work continues to emphasize that capacity-building must accompany technological change.

The second is governance. As institutions move from pilots to wider deployment, they need clear rules around responsibility, oversight, testing and safety. The UK initiative’s emphasis on teacher collaboration, rigorous testing, high safety standards and structured evaluation shows that public-sector actors are increasingly treating governance as central rather than secondary.

The third is legitimacy. AI systems are more likely to endure when the people affected by them understand their purpose and trust the conditions of their use. That matters in classrooms, where teachers and students live with the consequences, and in workplaces, where staff need confidence that AI is augmenting work responsibly rather than reshaping roles arbitrarily. None of the cases above has fully solved this challenge, but all of them point toward it.

The fourth is feedback loops. Institutions need evidence not only that AI can produce outputs, but that it improves outcomes under real conditions. The UK’s pilot-and-benchmark structure, UNESCO’s observatory model and Merck’s move from live use toward global scaling all suggest that learning systems around AI adoption are becoming as important as the tools themselves.

The real contest is no longer only about model capability

The temptation in AI debates is always to focus on the newest model or the next capability jump. But that frame is becoming less useful. The harder and more consequential question is whether institutions can redesign workflows, governance, trust and evidence practices quickly enough to keep pace. The future of AI adoption will not be determined only by what models can do. It will be shaped by whether enterprises, governments and education systems can build the institutional conditions that make those capabilities usable, accountable and broadly beneficial.