

Shared safety responsibility

By Cai Cuihong | China Daily Global | Updated: 2026-05-14 22:38
[Illustration by SONG CHEN / CHINA DAILY]

Sino-US cooperation is crucial for developing the practical guardrails that are imperative in a world in which AI races and risks are increasingly intertwined

As the development of artificial intelligence reshapes industries, intensifies corporate competition and creates shared risks, building guardrails and setting credible rules and public-safety foundations for technological development are global responsibilities shared by China and the United States.

In today’s AI landscape, the US still holds a comparative advantage in multiple fields. That advantage is reflected not only in the continued release of frontier models by companies such as OpenAI, Google, Anthropic and Meta, but also in advanced GPUs, cloud platforms, foundational software, model-evaluation systems, standards bodies, alliance networks and the ability to export a global technology stack. Stanford University’s 2026 AI Index Report shows that in 2025, US institutions produced 59 notable AI models, compared with 35 from China. The US remains the leading force in frontier model innovation, computing ecosystems and platform capacity, and it is increasingly converting these technological edges into rule-shaping power.

China, however, is not merely playing catch-up; it has advantages of its own. China possesses a vast domestic market, rich industrial applications, a complete manufacturing base, strong engineering and deployment capacity, a fast-growing open-source ecosystem and significant room for digital cooperation with other Global South countries. The Stanford report shows that China ranks first worldwide in AI paper output, citations and granted patents, reflecting deep research accumulation and a sustained innovation base. In recent years, China has advanced its “AI Plus” initiative, accelerating AI adoption in manufacturing, healthcare, education, transportation and urban governance, while expanding access through open-source models and low-cost deployment. China’s strength does not lie in a single frontier breakthrough; it lies in application, implementation, cost efficiency and the ability to translate technology into industrial capacity.

This is why China-US AI relations cannot be reduced to a simple choice between cooperation and confrontation. The US is unlikely to loosen its grip on key technology chains, frontier model ecosystems or international rule-setting power. China, for its part, will not accept being folded passively into risk-assessment, technology-review or compliance frameworks defined unilaterally by the US. Acknowledging competition is not a rejection of cooperation. It is the necessary starting point for any serious cooperative governance effort.

AI needs governance guardrails because its pervasive reach magnifies the costs of competition. Even when traditional technologies have strategic significance, their impact often remains concentrated in specific sectors or dimensions of national power. Once AI is widely deployed, however, it will be embedded in products, services, manufacturing processes, infrastructure and daily life. In this context, without a basic framework of trust between China and the US, trade in AI-enabled products, technological cooperation and industrial ties could all be consumed by security concerns, while the lack of trust would directly impede industrial cooperation and the functioning of global markets.

AI also creates threats that neither country can manage alone. In a recent column, Thomas Friedman argued that the new common threat facing the US and China is not another state, but the risk created by “the malign uses of artificial intelligence”. Agentic AI, automated hacking tools and deepfake systems could give “small, malign actors” destructive capabilities once available only to states. They could launch cyberattacks, spread false information, manipulate public perception and disrupt critical infrastructure at far lower cost. Autonomous weapons, unmanned systems and automated military decision-making could also heighten the risk of miscalculation, loss of control and crisis escalation. These dangers do not respect geopolitical blocs. Both countries are potential targets.

Compounding these risks is a serious governance lag. Model capabilities, agentic behavior, autonomous decision-making, content generation and algorithmic integration are advancing faster than rules can be written. Many risks will not be visible at the moment of invention; they will emerge only through deployment at scale and cross-border diffusion. That is the deeper paradox of China-US AI competition: the harder each side tries to secure itself through technological advantage, the more insecurity both may create if there is a lack of shared guardrails.

Guardrails mean creating mechanisms — risk alerts, incident reporting, interoperable technical standards, crisis communication and boundaries around high-risk uses — to prevent AI competition from sliding into miscalculation, loss of control or systemic risk. Such guardrails should rest on three principles: reciprocity, boundaries and openness.

First, cooperation should be reciprocal. Washington and Beijing can exchange views on risk classification, safety-incident reporting, model-evaluation methods, red-teaming, deepfake detection and content provenance. These are areas with clear public-safety implications and some room for technical consensus. AI governance cannot become a process in which one side writes the standards and the other submits to inspection. China can support interoperable evaluation formats, shared risk terminology and exchanges on testing methods. It cannot accept one country’s unilateral testing results as the equivalent of international compliance standards, nor should safety evaluation become a pretext for intrusive audits of model weights, training data or core engineering parameters.

Second, cooperation should have boundaries. The purpose of guardrails is to prevent risks from spiraling out of control, not to create new technology barriers. The two countries can discuss high-risk issues such as cyberattacks, AI-enabled fraud, loss of control in autonomous systems, critical-infrastructure protection and AI-related red lines in nuclear command. They can also explore mechanisms for incident reporting, crisis communication and early warning. But AI security should not become an all-purpose justification for securitizing civilian AI, open-source models, cloud services, scientific exchange or talent mobility. If governance is over-securitized, the space for cooperation will shrink and global AI development will become artificially fragmented.

Third, governance should remain open. Global AI governance cannot become a closed club of a few technological powers. The United Nations, the International Telecommunication Union, the International Organization for Standardization, BRICS, the Shanghai Cooperation Organization and other Global South platforms all have roles to play. Countries in the Global South should not be mere rule-takers; they should be participants in, and beneficiaries of, AI governance. The US, China and other major AI-capable actors should work together to help developing countries close the gaps in computing power, data, talent and infrastructure. Through open-source tools, low-cost deployment, training programs, compliance templates and applications that serve basic development needs, they can help ensure that AI truly serves development and creates replicable public goods in areas such as agriculture, healthcare, education, disaster warning and edge AI.

In the age of AI, China and the US cannot return to a world without competition, nor can they accept competition without guardrails. Both countries can build a consensus on the basis of reciprocity, prevent risks from spiraling out of control within clear boundaries, and expand cooperation through open governance, thereby providing a more stable and secure foundation for global AI development.

Cai Cuihong

The author is a professor of international relations at the Institute of International Studies at Fudan University and the deputy director of the Center for Global AI Innovative Governance.

The author contributed this article to China Watch, a think tank powered by China Daily. The views do not necessarily reflect those of China Daily.

Contact the editor at editor@chinawatch.cn.
