After an initial phase in which AI was understood mainly as a tool for content generation, question answering, and office support, the technology is now moving into a new stage: it is beginning not only to recommend, but to act on behalf of humans across a widening range of tasks.
The rise of “agentic” systems (agents that can receive goals, reason across multiple steps, interact with tools, and execute parts of workflows or transactions) signals an important shift in the digital economy. If the first wave of e-commerce moved retail onto the internet, and the second wave pushed it onto mobile, the emerging phase may move parts of product discovery, comparison, and purchasing into AI platforms themselves.
Crucially, the implications do not stop at the shopping interface. They may reshape how companies reach markets, how brands distribute product information, how supply chains receive demand signals, and even how value is divided among platforms, retailers, and suppliers. Reuters has recently reported new moves by Alibaba to launch agentic AI platforms aimed at commercial and enterprise operations, while DHL’s business reports show that AI, personalization, and logistics are increasingly intertwined in reshaping digital buying behaviour. That suggests agentic commerce is not a distant slogan. It is becoming a real competitive frontier.

For small and medium-sized enterprises, especially in markets such as Vietnam, this trend is both an opportunity and a stress test. The opportunity lies in AI’s ability to help smaller firms reach customers more effectively, automate parts of product description and order handling, classify demand, and optimize communications. The stress test lies in the fact that once transactions begin to flow through AI agents, competition will no longer be limited to advertising or marketplace presence. Firms will need to standardize product data, clarify pricing policies, ensure delivery reliability, and maintain brand credibility at a higher level, because AI systems will tend to privilege suppliers with clearer information, stronger trust signals, and more consistent execution.
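The point about standardized product data can be made concrete. The sketch below is a minimal, hypothetical product record; the field names and the completeness checks are assumptions invented for illustration, not an established agentic-commerce schema. It shows the kind of machine-checkable clarity an AI buying agent could plausibly reward:

```python
from dataclasses import dataclass

# Hypothetical minimal product record. Field names are illustrative
# assumptions, not a published standard for agentic commerce.
@dataclass
class ProductRecord:
    sku: str
    name: str
    price_vnd: int            # unambiguous integer price in VND
    in_stock: bool
    delivery_days_max: int    # worst-case delivery promise
    return_policy_url: str

    def completeness_issues(self) -> list[str]:
        """Flag gaps an AI buying agent would likely penalize."""
        issues = []
        if not self.sku:
            issues.append("missing SKU")
        if self.price_vnd <= 0:
            issues.append("price not stated unambiguously")
        if not self.return_policy_url:
            issues.append("no linked return policy")
        return issues

record = ProductRecord(
    sku="VN-0042", name="Ceramic teapot", price_vnd=250_000,
    in_stock=True, delivery_days_max=3, return_policy_url="",
)
print(record.completeness_issues())  # → ['no linked return policy']
```

The design choice here is deliberate: each check maps a vague trust signal ("clear pricing", "stated return policy") onto a field an agent can verify automatically, which is exactly the kind of standardization the stress test demands.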
Another defining shift of the current phase is the widening gap between the speed at which workers are learning and the speed at which institutions are changing. For years, the conventional fear was that employees would resist AI. Yet recent surveys suggest almost the opposite. McKinsey argues that the greatest barrier to scaling AI is not the workforce, but leaders who are not moving quickly enough. Adecco’s research similarly indicates that workers are becoming more proactive in learning AI, want greater control over their skills development, and in many cases are already advancing faster through self-training than through formal company programs.
This is a highly significant signal for management. If the workforce is adapting faster than the system around it, the problem is no longer persuading people to use AI. The problem is redesigning the work environment so that this new capability is not wasted or used without direction. An organization can easily fall into a strange paradox: individuals use AI every day, but the enterprise fails to capture aggregate value because it lacks policy, data standards, approval processes, and business goals tied to AI deployment.
That is why training in the AI era cannot stop at a few introductory tool sessions. Companies need to shift from tool familiarization to judgement building: when AI should be used, how far it should be trusted, how outputs should be checked, how data should be protected, and how human decisions should interact with machine recommendations. In other words, the most important AI skill is not merely knowing how to operate a model. It is knowing how to place the model inside the right governance context.
Precisely because AI is moving from answering to acting, the next phase will inevitably be defined by risk governance. The OECD now maintains its AI Incidents Monitor to track real-world AI incidents and hazards, while also emphasizing that AI use in both public and private sectors requires clear accountability frameworks.
The question is no longer simply whether a model can produce a clever answer. The real question is where limits are set, and how oversight is designed, when an AI agent can access sensitive data, conduct parts of a transaction, filter information, or shape human decisions.
In this context, trust becomes not only an ethical issue but an economic one. An opaque AI system can reduce customer acceptance. An agentic tool that lacks observability and traceability can make companies reluctant to scale. A workplace without clear guidance can push employees toward improvised AI use, creating risks around security, misinformation, and ambiguous accountability. The future of AI, therefore, will not be determined by computing power alone. It will also be determined by the quality of the institutions built around that power.

At the enterprise level, this implies at least three shifts. First, companies need observability into AI behaviour: what the system accessed, what it recommended, what it executed, and who approved it. Second, they need clear boundaries between assistive automation and delegated automation, especially in financial transactions, customer interactions, and sensitive data processing. Third, they need to connect AI governance directly to brand trust, because in the digital economy a single automated failure can damage not only operations but also reputation.
AI’s next turning point is therefore unlikely to belong simply to the organization with the largest model. It is more likely to belong to the organization that can create the greatest trust. Agentic commerce, digital agents, multistep automation, and human-machine collaboration will continue to advance. But only those enterprises that place them within a clear governance framework, respect data, respect customers, and invest seriously in human capability will be able to convert those technologies into durable advantage. In a world where AI increasingly knows how to do things, the central question will be this: who is trustworthy enough to let AI do them on their behalf?
---------------
Main reference sources: Reuters, DHL, McKinsey, Adecco, OECD, and the source manuscript.