Fast-moving developments in how countries trade, invest, and make use of artificial intelligence will affect all of us over the next nine months.

As we enter a new academic year in the face of ongoing uncertainty and a fragmented world, what can we expect to see?
Geo-economics, multi-alignment and technology
— Linda Yueh, Adjunct Professor of Economics
This coming year is likely to be one for recalibration in geo-economics and technology, to name but two areas.
One recalibration concerns the intertwining of geopolitics and economics – and thus geo-economics. Tariffs imposed by the U.S. on its trading partners have led to a recalibration of global value chains, since the countries that run the largest trade surpluses with the U.S. are being subjected to the highest tariffs. These countries are often the “connector” countries for global value chains.
For instance, Mexico and Vietnam run large trade surpluses because of their prime positions in international trade, not because of their domestic markets. They benefited from the China+1 strategy pursued since the first Trump administration, when the 2018 tariffs on China prompted multinational corporations to diversify their supply chains. But now that the connector countries are in the spotlight, there may need to be a further recalibration of how trade with the U.S. is structured.
Another recalibration could be termed multi-alignment. Countries could have multiple alignments rather than just be an ally or competitor of another nation. On some issues, countries would seek to cooperate – for instance, on global public goods. On other issues, they would compete – for instance, on technology. On still others, they would draw a line – for instance, on human rights.
“Countries could have multiple alignments rather than just be an ally or competitor of another nation”
This realignment will likely encompass both commercial and national security aims, which would require a clear recalibration of a nation’s priorities.
And then there’s technology. Technologies such as generative AI will be affected by the shifts in geo-economics, but also by an evolving understanding of the implications of artificial intelligence. Debates within countries and firms around AI safety and ethics will likely require calibrating the benefits of using a potentially transformative but less well understood technology such as GenAI. The rapid adoption of ChatGPT since its release just three years ago has led firms and governments to recalibrate its use cases, and they will likely continue to evaluate the appropriate governance for such quickly changing technology.
So this year could see significant shifts in trade, investment and adoption of technology. Getting the recalibration right could generate significant benefits for countries and companies.
AI’s next leap – and the missing piece that’s holding it back
— Nicos Savva, Professor of Management Science and Operations and Academic Director of the Data Science and AI Initiative
The pace of progress in artificial intelligence has been nothing short of breathtaking. In a few short weeks, we’ve seen models like GPT-5, Gemini 2.5, Grok 4, and Claude Opus 4.1 push the boundaries of what’s possible — breaking benchmark records in reasoning, summarisation, coding, translation, and creative writing. For knowledge workers, these tools are already delivering tangible benefits: saving time, improving quality, and enabling tasks that were previously out of reach.
It’s tempting to conclude that we’re on the cusp of a wholesale replacement of large swathes of knowledge work. And yet, I believe that — for now — one critical capability is missing from even the most advanced AI systems. Without it, AI will remain a powerful tool, but not the transformational force that some predict.
That missing piece is the ability to learn over time from direct experience on a specific task.
“One critical capability is missing. Without it, AI will remain a powerful tool, but not the transformational force that some predict”
To illustrate, consider how a PhD student learns to do research. Over the five (or these days, more often six) years of their doctoral journey, they study formal material — statistics, theory, methods — and large language models (LLMs) are already capable of mastering such content. But the real learning happens in less structured ways: drafting a paper, receiving an advisor’s comments, rewriting in response, presenting at a workshop, fielding difficult questions, revising again, and repeating the cycle dozens of times. This iterative refinement, guided by feedback in a specific context, is how expertise deepens.
Now imagine replacing that one PhD student with a sequence of students: one writes the first draft, another revises it based on feedback, a third presents it at a conference, a fourth incorporates new suggestions — all without access to the thought process or accumulated learning of their predecessors. No matter how intelligent each individual is, the “system” as a whole would never get better at producing research. It would be stuck at the starting line, making the same mistakes again and again.
That’s how today’s LLMs work. They are astonishingly capable within a single interaction, but they don’t truly remember and improve across multiple attempts at the same task. They have no persistent learning curve in the way that a human develops one. The ability to absorb feedback, adapt, and carry forward lessons over time — within the same project, in the same context — is at the heart of most high-value knowledge work.
“LLMs are astonishingly capable within a single interaction, but they have no persistent learning curve in the way that a human develops one”
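To make that statelessness concrete, here is a minimal sketch in Python. The `call_llm` function is a hypothetical stand-in for any chat-completion request, not a real library call; the point is simply that each request arrives with only the text the caller includes, so any refinement across attempts has to be carried in the prompt rather than remembered by the model.

```python
# Minimal sketch of statelessness across attempts. `call_llm` is a
# hypothetical stand-in for a single LLM request/response.

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in: returns a placeholder 'draft' for a prompt."""
    return f"[draft produced from a prompt of {len(prompt)} characters]"

# Attempt 1: the model drafts a section.
draft_v1 = call_llm("Write the introduction to a paper on hospital queueing.")

# Attempt 2: identical request, and nothing from attempt 1 carries over on
# the model's side -- it starts from zero.
draft_v2 = call_llm("Write the introduction to a paper on hospital queueing.")

# To get iterative refinement, the earlier draft and the advisor's feedback
# must be re-sent by the caller; the "learning" lives in the prompt.
feedback = "Advisor: sharpen the research question; cut the literature review."
draft_v3 = call_llm(
    "Revise this introduction.\n\nPrevious draft:\n" + draft_v1
    + "\n\nFeedback:\n" + feedback
)
print(draft_v3)
```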
The leading AI labs are aware of this limitation and are exploring ways to address it — through memory systems, agent frameworks, and continuous fine-tuning. If they succeed, we could see a genuine step change in what AI can do. But until then, we should expect AI to be an invaluable assistant to knowledge workers, rather than their replacement.
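As a rough illustration of the kind of external “memory system” referred to above, the sketch below stores lessons outside the model and re-injects them into later prompts. The file name, helper functions and prompt format are illustrative assumptions of mine, not any lab’s actual design.

```python
# Rough sketch of an external memory layer: lessons from earlier attempts are
# persisted outside the model and prepended to future prompts. All names and
# formats here are illustrative assumptions.
import json
from pathlib import Path

MEMORY_FILE = Path("project_memory.json")  # illustrative location

def load_memory() -> list[str]:
    """Return previously recorded lessons, if any."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def remember(lesson: str) -> None:
    """Append a lesson so future sessions can see it."""
    lessons = load_memory() + [lesson]
    MEMORY_FILE.write_text(json.dumps(lessons, indent=2))

def build_prompt(task: str) -> str:
    """Prepend accumulated lessons to a new task."""
    preamble = "\n".join(f"- {item}" for item in load_memory())
    return f"Lessons from earlier attempts:\n{preamble}\n\nTask:\n{task}"

# After a round of feedback, the caller records what was learned...
remember("Reviewers want the contribution stated in the first paragraph.")
# ...and the next session's prompt carries that lesson forward.
print(build_prompt("Draft the introduction for the revised submission."))
```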
In the coming academic year, I predict the most successful business applications of AI will come from organisations that understand both sides of this reality: the unprecedented capabilities of these tools and their current inability to learn like humans do. Those who combine human adaptability with AI’s raw processing power will outpace both the AI-only and human-only approaches.
The AI revolution is real — but its most transformative wave is yet to arrive.