AI Governance and Global Stability: Why U.S. Leadership Matters

Authors

  • Rajat Bhandari, Harrisburg University of Science and Technology
  • Shraddha Bhandari, Chief Financial Officer, Bhageshwor Sugar and Chemical Industries, Kathmandu. https://orcid.org/0009-0008-6058-4219

DOI:

https://doi.org/10.15575/jcspi.v3i1.1306

Keywords:

Artificial Intelligence (AI), Agency theory, U.S. Leadership, Authoritarianism and AI, Ethical AI Development, Global Governance

Abstract

The global landscape is undergoing a profound transformation driven by artificial intelligence (AI), a technology with the potential to reshape global power dynamics, economies, and societies. The United States (U.S.) has historically played a central role in guiding technological advancements, offering leadership that has prioritized ethical governance and global stability. Drawing a parallel to U.S. leadership during the development of atomic weapons, this study emphasizes the necessity for the U.S. to take a proactive and responsible role in the governance of AI. Without U.S. leadership, control of AI risks falling to authoritarian regimes such as China and Russia, whose use of AI for surveillance, censorship, disinformation, and military purposes could destabilize international norms and threaten democratic values. The study uses agency theory to argue that the global community must rely on the U.S. as a responsible agent to ensure AI technologies are used ethically and for the collective benefit of humanity. It also incorporates social comparison theory, technological determinism, and international relations realism to further illustrate the strategic and moral imperative of U.S. leadership in AI governance. By examining the historical context of U.S. leadership in managing disruptive technologies, the study highlights the urgent need for the U.S. to establish global AI governance frameworks that prioritize human rights, equity, and democratic values, countering the risks posed by authoritarian misuse of AI. Methodologically, the study employs a systematic meta-analysis grounded in agency theory and these complementary frameworks, drawing on peer-reviewed literature published between 2010 and 2025 and sourced from databases such as Google Scholar, PubMed, and Web of Science. The analysis reveals that U.S. leadership in AI prioritizes ethical development, transparency, and international collaboration, in sharp contrast with China's and Russia's authoritarian strategies centered on surveillance, militarization, and disinformation. This contrast underscores the urgent need for U.S.-led global norms to ensure AI aligns with democratic values and fosters global stability.

Author Biography

Shraddha Bhandari, Chief Financial Officer, Bhageshwor Sugar and Chemical Industries, Kathmandu

Shraddha Bhandari is a seasoned finance professional with a career spanning more than two decades, during which she has accumulated extensive experience in teaching, financial management, advisory roles, and leadership positions across various organizations. Her expertise lies in strategic financial planning, risk management, compliance, and operational excellence, supported by a solid academic foundation in finance and sociology.

Published

2025-10-31