Just over a year into the era of agentic AI, the report cards are in. Assessments from strategy firms, security researchers, and multilateral bodies agree on the headline: AI is shifting from chat to action. The question for democracies is no longer whether to permit agents, but how to deploy them safely and at scale, and whether responsible autonomy can become a routine feature of democratic governance.
The United States leads on AI capability, while China leads on diffusion. Within months of DeepSeek shaking up the market with its reasoning models, systems emerged that act on a user’s behalf. Open agent frameworks and cross-border alliances followed. Beijing-based Zhipu AI’s AutoGLM Rumination is more than another model.
It is a case study in how autonomy and geopolitics now intersect. AutoGLM’s design pairs an autonomous reasoning engine, distributed freely, with an “international alliance” linking Belt and Road partners, demonstrating that agentic AI is not just a technological export but a strategic instrument. Its spread highlights the need for interoperable safeguards so that agent workflows remain secure as models, tools, and marketplaces traverse borders.
If democracies fail to embed transparent, auditable agents within their own digital infrastructure, they risk importing opaque ones shaped elsewhere. That is where public infrastructure must step forward. The United Nations defines Digital Public Infrastructure (DPI) as the backbone of modern societies: secure, interoperable rails for identity, payments, and data. Now picture that stack with autonomy built in.
A permitting agent that verifies eligibility, drafts applications, and books inspections could compress months into days. When designed well, such systems turn governments into platforms that learn, enabling citizens to meet a state that anticipates rather than reacts. When designed poorly, they risk becoming conduits for foreign algorithms making domestic administrative decisions.
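As a minimal sketch of how such a permitting agent might be structured, the Python below is illustrative only: the PermitCase record, the stubbed eligibility check, and the scheduling step are assumptions, not a description of any deployed system. What matters is the shape of the workflow, with every step logged and a human sign-off required before anything binding happens.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class PermitCase:
    applicant_id: str
    permit_type: str
    documents_complete: bool
    audit_trail: list = field(default_factory=list)  # every step is recorded here

def log(case: PermitCase, step: str, detail: str) -> None:
    """Append a timestamped entry so a reviewer can reconstruct what the agent did and why."""
    case.audit_trail.append({
        "at": datetime.now(timezone.utc).isoformat(),
        "step": step,
        "detail": detail,
    })

def permitting_agent(case: PermitCase, human_approves) -> str:
    """Verify eligibility, draft the application, and book an inspection,
    pausing for human sign-off before anything binding happens."""
    # 1. Eligibility check (stubbed; a real system would consult a rules engine).
    if not case.documents_complete:
        log(case, "eligibility", "documents incomplete; referred to caseworker")
        return "referred to a human caseworker"
    log(case, "eligibility", "requirements met")

    # 2. Draft the application from data the citizen has consented to share.
    draft = {"applicant": case.applicant_id, "type": case.permit_type}
    log(case, "draft", f"prepared draft with fields {sorted(draft)}")

    # 3. Human override: the agent proposes, a person decides.
    if not human_approves(draft):
        log(case, "override", "reviewer declined the draft")
        return "returned for revision"

    # 4. Only after approval does the agent book the inspection slot (stubbed).
    slot = "2025-07-01T09:00"
    log(case, "booking", f"inspection booked for {slot}")
    return f"inspection scheduled for {slot}"

if __name__ == "__main__":
    case = PermitCase("A-1042", "building-extension", documents_complete=True)
    print(permitting_agent(case, human_approves=lambda draft: True))
    print(case.audit_trail)
```

The point of the sketch is the ordering: the agent gathers and drafts, a person approves, and only then does anything take effect, with a complete trail left behind.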
Unlike traditional chatbots or predictive models, agentic AI possesses true agency. It can reason, plan, act, and learn with minimal human prompting: a digital civil servant capable of cross-checking welfare claims, analyzing policy outcomes, or coordinating disaster response in real time. Properly directed, these systems could transform bureaucratic bottlenecks into responsive governance. Left unguarded, they could also erode trust, privacy, and accountability, the pillars of democracy itself.
Trust is also why the Global South must be part of this conversation. India’s apex public policy think tank, NITI Aayog, has laid the groundwork with Responsible AI for All and a Roadmap on AI for Inclusive Societal Development, proposing mission-scale programs for 490 million informal workers. Work on AI governance has ramped up across multiple forums, with the OECD, European Union, United Nations, and African Union each advancing frameworks centered on transparency, trustworthiness, accountability, and other responsible-AI norms.
The Stanford AI Index 2025 notes growing global optimism about AI’s social benefits and fast gains in agent-like capabilities, but it also cautions that optimism is not the same as safety. That caution is reflected in the deep regional divides that remain in trust in AI systems.
In countries like China (83%), Indonesia (80%), and Thailand (77%), strong majorities see AI products and services as more beneficial than harmful. In contrast, optimism remains far lower in places like Canada (40%), the United States (39%), and the Netherlands (36%). Governments are responding: U.S. federal agencies introduced 59 AI-related regulations in 2024, more than double the number in 2023, and nearly 700 AI-related bills were introduced across 45 states that year, up from 191 in 2023.
Rather than legislate from fear or outsource experimentation, democratic governments should build ‘Digital Public Intelligence’—the next DPI layer that puts AI agents to work inside government while keeping them in check with audits, a right to appeal, human override, and opt-out. To achieve this, democracies can operationalize safe and responsible autonomy with a four-step plan aligning research, UN governance, and national policy:
- Establish public sandboxes inside existing DPI: Pilot agents across core public-service delivery systems to evaluate how they assist decision-making, automate tasks, and interact with citizens safely. Document what the agent saw, used, decided, and why (a sketch of such a record follows this list).
- Set minimum safety baselines for adoption: Require sandboxing, human-in-the-loop review for high-stakes cases, auditable logs, and clear escalation paths when agents conflict with policy or human judgment.
- Publish the playbook: Open-source interface and logging standards so that successful patterns travel across allied nations, enabling interoperable democratic AI rather than fragmented silos.
- Measure legitimacy as well as latency: Track user understanding, appeal rates, error recovery, and public trust alongside speed and cost.
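To make the first and last steps concrete, here is a rough sketch of what a shared decision record and a legitimacy report could look like. The AgentDecisionRecord fields and the legitimacy_report calculation are illustrative assumptions, not an existing standard, and survey-based measures such as user understanding and public trust would sit alongside, not inside, these logs.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class AgentDecisionRecord:
    """One logged agent decision, in a format any partner administration could adopt."""
    case_id: str
    inputs_seen: list[str]     # which records the agent consulted
    decision: str              # what it decided
    rationale: str             # why, in plain language
    latency_seconds: float     # how long it took
    escalated_to_human: bool   # was a person pulled in?
    appealed: bool             # did the citizen appeal?
    appeal_upheld: bool        # did the appeal succeed, i.e. was the agent wrong?

def legitimacy_report(records: list[AgentDecisionRecord]) -> dict:
    """Aggregate legitimacy measures alongside the usual speed metric."""
    n = len(records)
    appeals = sum(r.appealed for r in records)
    return {
        "mean_latency_s": mean(r.latency_seconds for r in records),
        "escalation_rate": sum(r.escalated_to_human for r in records) / n,
        "appeal_rate": appeals / n,
        "error_rate_on_appeal": sum(r.appeal_upheld for r in records) / max(1, appeals),
    }

if __name__ == "__main__":
    sample = [
        AgentDecisionRecord("C-1", ["tax-record"], "approve", "meets income threshold",
                            2.1, False, False, False),
        AgentDecisionRecord("C-2", ["tax-record", "land-registry"], "deny", "missing title deed",
                            3.4, True, True, True),
    ]
    print(legitimacy_report(sample))
```

Publishing a record format like this, rather than any particular model, is what lets successful patterns travel across allied nations while each retains control of its own systems.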
Together, these steps form a democratic alternative to black-box autonomy, a transparent operating manual any partner nation can adopt without surrendering sovereignty. If done right, agentic AI can become a shared civic utility, not a proprietary weapon.
Democracies already have a structural advantage: they coordinate diversity through rules. Agentic AI can extend that coordination into the digital realm, but only if people can see how decisions are made and who is accountable when things go wrong. John Dewey called democracy “a mode of associated living.” Digital Public Intelligence helps preserve that association when decisions are made at machine speed.
Agentic AI is coming either way. Systems like AutoGLM will not wait for slow-moving democracies to decide, and as policymakers convene, the choice is stark: not between using agents and banning them, but between importing opaque systems on foreign terms and building open, auditable agents within our own public infrastructure.
Democracies hold the advantage if they use it to transform DPI into Digital Public Intelligence: autonomy that is powerful, interoperable, and answerable to the people it serves.



